Selection Guide: 55-60 GHz Low-Noise / Medium-Power Amplifier (CHA2157-99F)

CHA2157-99F

The CHA2157-99F pairs a low noise figure of 3.5 dB, which keeps added noise to a minimum and preserves signal purity, with 15 dBm of output power, giving the signal enough drive to travel farther and arrive cleaner.
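To put these two headline numbers in linear units: 15 dBm is about 32 mW, and a 3.5 dB noise figure corresponds to an equivalent noise temperature near 359 K. The short Python sketch below applies the standard conversions; only the spec values are taken from this guide.

```python
def dbm_to_mw(p_dbm):
    """Power conversion: P[mW] = 10 ** (P[dBm] / 10)."""
    return 10 ** (p_dbm / 10)

def nf_to_noise_temp_k(nf_db, t0=290.0):
    """Noise figure (dB) to equivalent noise temperature (K):
    T_e = T0 * (F - 1), with linear noise factor F = 10 ** (NF/10)."""
    return t0 * (10 ** (nf_db / 10) - 1.0)

print(f"{dbm_to_mw(15.0):.1f} mW")         # 31.6 mW output at the 15 dBm spec
print(f"{nf_to_noise_temp_k(3.5):.0f} K")  # ~359 K for a 3.5 dB noise figure
```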

Operating frequency range: 55-60 GHz
Gain: 8-12 dB (typical 10 dB)
Gain flatness: ±1.0 dB (typical)
Noise figure: 3.5-4.5 dB (typical 3.5 dB)
Output power at 1 dB compression (P1dB): 13-15 dBm (typical 15 dBm)
Reverse isolation: 20-25 dB (typical)

The typical gain and noise figure feed directly into a receive chain's noise budget; see the Friis-cascade sketch after this list.
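A minimal cascaded-noise-figure sketch, using the typical values above (10 dB gain, 3.5 dB NF) as stage one. The second stage (15 dB gain, 6 dB NF) is a purely illustrative assumption, not a value from this guide; the point is that the front-end stage dominates the chain's noise figure.

```python
import math

def cascade_nf_db(stages):
    """Friis formula: F_total = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...
    stages: list of (gain_dB, nf_dB) tuples, first stage first.
    Returns the cascaded noise figure in dB."""
    f_total = 1.0    # running linear noise factor
    g_running = 1.0  # cumulative linear gain of preceding stages
    for gain_db, nf_db in stages:
        f = 10 ** (nf_db / 10)
        f_total += (f - 1.0) / g_running
        g_running *= 10 ** (gain_db / 10)
    return 10 * math.log10(f_total)

# Stage 1: CHA2157-99F typical values from the table above.
# Stage 2: hypothetical follow-on amplifier (assumed for illustration only).
chain = [(10.0, 3.5), (15.0, 6.0)]
print(f"Cascaded noise figure: {cascade_nf_db(chain):.2f} dB")  # ~4.04 dB
```

With only 10 dB of first-stage gain, the assumed second stage still adds about 0.5 dB to the chain noise figure, which is why the 3.5 dB front-end figure matters.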

Other part numbers in the CHA/CHT series, listed for reference:
CHA2080-98F
CHA3080-98F
CHA1077a98F
CHA1008-99F
CHA3090-98F
CHA7362-QWA
CHA4395-QDG
CHA4396-QDG
CHT3091-FAB
CHT3091a99F
CHT4660-QAG
CHT3091aQAG
CHT4660-FAB
CHT4690-FAB
CHT4690-QAG
CHT4690-99F
CHT4694-99F
