Oct 9 01:03:44.877306 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024 Oct 9 01:03:44.877341 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:03:44.877352 kernel: BIOS-provided physical RAM map: Oct 9 01:03:44.877359 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 01:03:44.877365 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 01:03:44.877371 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 01:03:44.877378 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 01:03:44.877384 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 01:03:44.877391 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 01:03:44.877397 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 01:03:44.877406 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 9 01:03:44.877412 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 9 01:03:44.877419 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 01:03:44.877425 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 01:03:44.877433 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 01:03:44.877439 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 01:03:44.877449 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 01:03:44.877455 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 01:03:44.877462 kernel: BIOS-e820: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved Oct 9 01:03:44.877468 kernel: NX (Execute Disable) protection: active Oct 9 01:03:44.877475 kernel: APIC: Static calls initialized Oct 9 01:03:44.877482 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable Oct 9 01:03:44.877489 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable Oct 9 01:03:44.877495 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable Oct 9 01:03:44.877502 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable Oct 9 01:03:44.877508 kernel: extended physical RAM map: Oct 9 01:03:44.877515 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 01:03:44.877524 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 01:03:44.877531 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 01:03:44.877537 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 01:03:44.877544 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 01:03:44.877551 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 01:03:44.877557 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 01:03:44.877564 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b62e017] usable Oct 9 01:03:44.877571 kernel: 
reserve setup_data: [mem 0x000000009b62e018-0x000000009b66ae57] usable Oct 9 01:03:44.877577 kernel: reserve setup_data: [mem 0x000000009b66ae58-0x000000009b66b017] usable Oct 9 01:03:44.877584 kernel: reserve setup_data: [mem 0x000000009b66b018-0x000000009b674c57] usable Oct 9 01:03:44.877591 kernel: reserve setup_data: [mem 0x000000009b674c58-0x000000009c8eefff] usable Oct 9 01:03:44.877600 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 9 01:03:44.877606 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 01:03:44.877617 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 01:03:44.877624 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 01:03:44.877630 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 01:03:44.877637 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 01:03:44.877646 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 01:03:44.877654 kernel: reserve setup_data: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved Oct 9 01:03:44.877660 kernel: efi: EFI v2.7 by EDK II Oct 9 01:03:44.877668 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b6b3018 RNG=0x9cb73018 Oct 9 01:03:44.877675 kernel: random: crng init done Oct 9 01:03:44.877682 kernel: efi: Remove mem127: MMIO range=[0xffe00000-0xffffffff] (2MB) from e820 map Oct 9 01:03:44.877689 kernel: e820: remove [mem 0xffe00000-0xffffffff] reserved Oct 9 01:03:44.877696 kernel: secureboot: Secure boot disabled Oct 9 01:03:44.877703 kernel: SMBIOS 2.8 present. Oct 9 01:03:44.877710 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 9 01:03:44.877719 kernel: Hypervisor detected: KVM Oct 9 01:03:44.877726 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 01:03:44.877733 kernel: kvm-clock: using sched offset of 4544252319 cycles Oct 9 01:03:44.877740 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 01:03:44.877748 kernel: tsc: Detected 2794.750 MHz processor Oct 9 01:03:44.877755 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 01:03:44.877762 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 01:03:44.877770 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 9 01:03:44.877777 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 9 01:03:44.877784 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 01:03:44.877793 kernel: Using GB pages for direct mapping Oct 9 01:03:44.877800 kernel: ACPI: Early table checksum verification disabled Oct 9 01:03:44.877808 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 9 01:03:44.877815 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 9 01:03:44.877822 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:03:44.877830 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:03:44.877857 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 9 01:03:44.877865 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:03:44.877872 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:03:44.877882 kernel: ACPI: MCFG 0x000000009CB76000 
00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:03:44.877889 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:03:44.877896 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 9 01:03:44.877903 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 9 01:03:44.877911 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 9 01:03:44.877918 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 9 01:03:44.877925 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 9 01:03:44.877932 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 9 01:03:44.877941 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 9 01:03:44.877948 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 9 01:03:44.877955 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 9 01:03:44.877962 kernel: No NUMA configuration found Oct 9 01:03:44.877969 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 9 01:03:44.877976 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 9 01:03:44.877983 kernel: Zone ranges: Oct 9 01:03:44.877990 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 01:03:44.877997 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 9 01:03:44.878004 kernel: Normal empty Oct 9 01:03:44.878014 kernel: Movable zone start for each node Oct 9 01:03:44.878021 kernel: Early memory node ranges Oct 9 01:03:44.878028 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 9 01:03:44.878035 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 9 01:03:44.878043 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 9 01:03:44.878052 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 9 01:03:44.878061 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 9 01:03:44.878069 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 9 01:03:44.878078 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 9 01:03:44.878090 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 01:03:44.878099 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 9 01:03:44.878107 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 9 01:03:44.878116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 01:03:44.878125 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 9 01:03:44.878134 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 9 01:03:44.878142 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 9 01:03:44.878151 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 01:03:44.878160 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 01:03:44.878171 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 01:03:44.878180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 01:03:44.878189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 01:03:44.878196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 01:03:44.878203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 01:03:44.878210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 01:03:44.878217 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 
01:03:44.878224 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 01:03:44.878231 kernel: TSC deadline timer available Oct 9 01:03:44.878247 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 9 01:03:44.878255 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 01:03:44.878262 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 9 01:03:44.878272 kernel: kvm-guest: setup PV sched yield Oct 9 01:03:44.878279 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 9 01:03:44.878286 kernel: Booting paravirtualized kernel on KVM Oct 9 01:03:44.878294 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 01:03:44.878301 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 9 01:03:44.878308 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 9 01:03:44.878323 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 9 01:03:44.878333 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 9 01:03:44.878340 kernel: kvm-guest: PV spinlocks enabled Oct 9 01:03:44.878348 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 01:03:44.878356 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:03:44.878364 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 01:03:44.878372 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 01:03:44.878381 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 01:03:44.878389 kernel: Fallback order for Node 0: 0 Oct 9 01:03:44.878396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 9 01:03:44.878404 kernel: Policy zone: DMA32 Oct 9 01:03:44.878411 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 01:03:44.878419 kernel: Memory: 2395860K/2567000K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 170884K reserved, 0K cma-reserved) Oct 9 01:03:44.878427 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 01:03:44.878434 kernel: ftrace: allocating 37786 entries in 148 pages Oct 9 01:03:44.878442 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 01:03:44.878452 kernel: Dynamic Preempt: voluntary Oct 9 01:03:44.878459 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 01:03:44.878467 kernel: rcu: RCU event tracing is enabled. Oct 9 01:03:44.878475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 01:03:44.878483 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 01:03:44.878490 kernel: Rude variant of Tasks RCU enabled. Oct 9 01:03:44.878498 kernel: Tracing variant of Tasks RCU enabled. Oct 9 01:03:44.878505 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 01:03:44.878513 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 01:03:44.878520 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 9 01:03:44.878530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 9 01:03:44.878537 kernel: Console: colour dummy device 80x25 Oct 9 01:03:44.878545 kernel: printk: console [ttyS0] enabled Oct 9 01:03:44.878552 kernel: ACPI: Core revision 20230628 Oct 9 01:03:44.878560 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 9 01:03:44.878567 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 01:03:44.878574 kernel: x2apic enabled Oct 9 01:03:44.878582 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 01:03:44.878589 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 9 01:03:44.878599 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 9 01:03:44.878606 kernel: kvm-guest: setup PV IPIs Oct 9 01:03:44.878614 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 01:03:44.878621 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 9 01:03:44.878629 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Oct 9 01:03:44.878636 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 9 01:03:44.878643 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 9 01:03:44.878651 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 9 01:03:44.878658 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 01:03:44.878668 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 01:03:44.878675 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 01:03:44.878682 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 01:03:44.878690 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 9 01:03:44.878697 kernel: RETBleed: Mitigation: untrained return thunk Oct 9 01:03:44.878705 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 01:03:44.878712 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 01:03:44.878720 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 9 01:03:44.878730 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 9 01:03:44.878737 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 9 01:03:44.878745 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 01:03:44.878752 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 01:03:44.878760 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 01:03:44.878767 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 01:03:44.878775 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 9 01:03:44.878782 kernel: Freeing SMP alternatives memory: 32K Oct 9 01:03:44.878789 kernel: pid_max: default: 32768 minimum: 301 Oct 9 01:03:44.878799 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 01:03:44.878806 kernel: landlock: Up and running. Oct 9 01:03:44.878813 kernel: SELinux: Initializing. 
Oct 9 01:03:44.878821 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 01:03:44.878828 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 01:03:44.878855 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 9 01:03:44.878862 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:03:44.878870 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:03:44.878877 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:03:44.878887 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 9 01:03:44.878895 kernel: ... version: 0 Oct 9 01:03:44.878902 kernel: ... bit width: 48 Oct 9 01:03:44.878909 kernel: ... generic registers: 6 Oct 9 01:03:44.878917 kernel: ... value mask: 0000ffffffffffff Oct 9 01:03:44.878924 kernel: ... max period: 00007fffffffffff Oct 9 01:03:44.878931 kernel: ... fixed-purpose events: 0 Oct 9 01:03:44.878939 kernel: ... event mask: 000000000000003f Oct 9 01:03:44.878946 kernel: signal: max sigframe size: 1776 Oct 9 01:03:44.878956 kernel: rcu: Hierarchical SRCU implementation. Oct 9 01:03:44.878963 kernel: rcu: Max phase no-delay instances is 400. Oct 9 01:03:44.878970 kernel: smp: Bringing up secondary CPUs ... Oct 9 01:03:44.878978 kernel: smpboot: x86: Booting SMP configuration: Oct 9 01:03:44.878985 kernel: .... node #0, CPUs: #1 #2 #3 Oct 9 01:03:44.878992 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 01:03:44.879000 kernel: smpboot: Max logical packages: 1 Oct 9 01:03:44.879007 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 9 01:03:44.879014 kernel: devtmpfs: initialized Oct 9 01:03:44.879024 kernel: x86/mm: Memory block size: 128MB Oct 9 01:03:44.879031 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 9 01:03:44.879039 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 9 01:03:44.879046 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 9 01:03:44.879054 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 9 01:03:44.879061 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 9 01:03:44.879069 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 01:03:44.879076 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 01:03:44.879083 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 01:03:44.879093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 01:03:44.879100 kernel: audit: initializing netlink subsys (disabled) Oct 9 01:03:44.879108 kernel: audit: type=2000 audit(1728435825.523:1): state=initialized audit_enabled=0 res=1 Oct 9 01:03:44.879116 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 01:03:44.879125 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 01:03:44.879134 kernel: cpuidle: using governor menu Oct 9 01:03:44.879143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 01:03:44.879153 kernel: dca service started, version 1.12.1 Oct 9 01:03:44.879162 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 9 01:03:44.879174 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 9 
01:03:44.879183 kernel: PCI: Using configuration type 1 for base access Oct 9 01:03:44.879193 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 9 01:03:44.879202 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 01:03:44.879211 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 01:03:44.879220 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 01:03:44.879229 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 01:03:44.879239 kernel: ACPI: Added _OSI(Module Device) Oct 9 01:03:44.879248 kernel: ACPI: Added _OSI(Processor Device) Oct 9 01:03:44.879260 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 01:03:44.879269 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 01:03:44.879278 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 01:03:44.879288 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 01:03:44.879297 kernel: ACPI: Interpreter enabled Oct 9 01:03:44.879306 kernel: ACPI: PM: (supports S0 S3 S5) Oct 9 01:03:44.879322 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 01:03:44.879332 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 01:03:44.879341 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 01:03:44.879353 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 9 01:03:44.879360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 01:03:44.879531 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 01:03:44.879659 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 9 01:03:44.879780 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 9 01:03:44.879790 kernel: PCI host bridge to bus 0000:00 Oct 9 01:03:44.879939 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 01:03:44.880060 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 01:03:44.880196 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 01:03:44.880304 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 9 01:03:44.880485 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 9 01:03:44.880650 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 9 01:03:44.880761 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 01:03:44.880913 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 9 01:03:44.881048 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 9 01:03:44.881174 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 9 01:03:44.881291 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 9 01:03:44.881422 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 9 01:03:44.881540 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 9 01:03:44.881659 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 01:03:44.881884 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 01:03:44.882033 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 9 01:03:44.882213 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 9 01:03:44.882374 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 9 01:03:44.882511 kernel: 
pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 9 01:03:44.882654 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 9 01:03:44.882797 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 9 01:03:44.882959 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 9 01:03:44.883100 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 9 01:03:44.883241 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 9 01:03:44.883408 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 9 01:03:44.883555 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 9 01:03:44.883700 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 9 01:03:44.883872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 9 01:03:44.884000 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 9 01:03:44.884130 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 9 01:03:44.884248 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 9 01:03:44.884387 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 9 01:03:44.884515 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 9 01:03:44.884634 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 9 01:03:44.884648 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 01:03:44.884656 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 01:03:44.884663 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 01:03:44.884671 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 01:03:44.884678 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 9 01:03:44.884686 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 9 01:03:44.884693 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 9 01:03:44.884700 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 9 01:03:44.884708 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 9 01:03:44.884718 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 9 01:03:44.884725 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 9 01:03:44.884733 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 9 01:03:44.884740 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 9 01:03:44.884748 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 9 01:03:44.884755 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 9 01:03:44.884763 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 9 01:03:44.884770 kernel: iommu: Default domain type: Translated Oct 9 01:03:44.884778 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 01:03:44.884787 kernel: efivars: Registered efivars operations Oct 9 01:03:44.884795 kernel: PCI: Using ACPI for IRQ routing Oct 9 01:03:44.884802 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 01:03:44.884810 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 9 01:03:44.884817 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 9 01:03:44.884824 kernel: e820: reserve RAM buffer [mem 0x9b62e018-0x9bffffff] Oct 9 01:03:44.884843 kernel: e820: reserve RAM buffer [mem 0x9b66b018-0x9bffffff] Oct 9 01:03:44.884858 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 9 01:03:44.884891 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 9 01:03:44.885040 kernel: 
pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 9 01:03:44.885169 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 9 01:03:44.885326 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 01:03:44.885340 kernel: vgaarb: loaded Oct 9 01:03:44.885349 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 9 01:03:44.885358 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 9 01:03:44.885366 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 01:03:44.885374 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 01:03:44.885385 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 01:03:44.885393 kernel: pnp: PnP ACPI init Oct 9 01:03:44.885526 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 9 01:03:44.885537 kernel: pnp: PnP ACPI: found 6 devices Oct 9 01:03:44.885545 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 01:03:44.885553 kernel: NET: Registered PF_INET protocol family Oct 9 01:03:44.885561 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 01:03:44.885569 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 01:03:44.885576 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 01:03:44.885588 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 01:03:44.885595 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 01:03:44.885603 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 01:03:44.885611 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 01:03:44.885618 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 01:03:44.885626 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 01:03:44.885633 kernel: NET: Registered PF_XDP protocol family Oct 9 01:03:44.885756 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 9 01:03:44.885912 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 9 01:03:44.886025 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 01:03:44.886137 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 01:03:44.886248 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 01:03:44.886366 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 9 01:03:44.886474 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 9 01:03:44.886581 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 9 01:03:44.886591 kernel: PCI: CLS 0 bytes, default 64 Oct 9 01:03:44.886603 kernel: Initialise system trusted keyrings Oct 9 01:03:44.886627 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 01:03:44.886637 kernel: Key type asymmetric registered Oct 9 01:03:44.886645 kernel: Asymmetric key parser 'x509' registered Oct 9 01:03:44.886653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 01:03:44.886661 kernel: io scheduler mq-deadline registered Oct 9 01:03:44.886669 kernel: io scheduler kyber registered Oct 9 01:03:44.886677 kernel: io scheduler bfq registered Oct 9 01:03:44.886685 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 01:03:44.886695 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 9 
01:03:44.886704 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 9 01:03:44.886712 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 9 01:03:44.886720 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 01:03:44.886728 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 01:03:44.886736 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 01:03:44.886744 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 01:03:44.886752 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 01:03:44.886760 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 01:03:44.886966 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 9 01:03:44.887081 kernel: rtc_cmos 00:04: registered as rtc0 Oct 9 01:03:44.887192 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T01:03:44 UTC (1728435824) Oct 9 01:03:44.887303 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 01:03:44.887320 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 01:03:44.887329 kernel: efifb: probing for efifb Oct 9 01:03:44.887336 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 9 01:03:44.887344 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 9 01:03:44.887356 kernel: efifb: scrolling: redraw Oct 9 01:03:44.887364 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 9 01:03:44.887372 kernel: Console: switching to colour frame buffer device 160x50 Oct 9 01:03:44.887380 kernel: fb0: EFI VGA frame buffer device Oct 9 01:03:44.887390 kernel: pstore: Using crash dump compression: deflate Oct 9 01:03:44.887398 kernel: pstore: Registered efi_pstore as persistent store backend Oct 9 01:03:44.887408 kernel: NET: Registered PF_INET6 protocol family Oct 9 01:03:44.887416 kernel: Segment Routing with IPv6 Oct 9 01:03:44.887424 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 01:03:44.887431 kernel: NET: Registered PF_PACKET protocol family Oct 9 01:03:44.887439 kernel: Key type dns_resolver registered Oct 9 01:03:44.887447 kernel: IPI shorthand broadcast: enabled Oct 9 01:03:44.887455 kernel: sched_clock: Marking stable (573001864, 150682935)->(773244934, -49560135) Oct 9 01:03:44.887463 kernel: registered taskstats version 1 Oct 9 01:03:44.887473 kernel: Loading compiled-in X.509 certificates Oct 9 01:03:44.887483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 01:03:44.887491 kernel: Key type .fscrypt registered Oct 9 01:03:44.887499 kernel: Key type fscrypt-provisioning registered Oct 9 01:03:44.887506 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 01:03:44.887514 kernel: ima: Allocated hash algorithm: sha1 Oct 9 01:03:44.887522 kernel: ima: No architecture policies found Oct 9 01:03:44.887530 kernel: clk: Disabling unused clocks Oct 9 01:03:44.887538 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 01:03:44.887548 kernel: Write protecting the kernel read-only data: 36864k Oct 9 01:03:44.887556 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 01:03:44.887564 kernel: Run /init as init process Oct 9 01:03:44.887572 kernel: with arguments: Oct 9 01:03:44.887579 kernel: /init Oct 9 01:03:44.887587 kernel: with environment: Oct 9 01:03:44.887595 kernel: HOME=/ Oct 9 01:03:44.887602 kernel: TERM=linux Oct 9 01:03:44.887610 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 01:03:44.887620 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:03:44.887633 systemd[1]: Detected virtualization kvm. Oct 9 01:03:44.887641 systemd[1]: Detected architecture x86-64. Oct 9 01:03:44.887649 systemd[1]: Running in initrd. Oct 9 01:03:44.887657 systemd[1]: No hostname configured, using default hostname. Oct 9 01:03:44.887665 systemd[1]: Hostname set to . Oct 9 01:03:44.887674 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:03:44.887682 systemd[1]: Queued start job for default target initrd.target. Oct 9 01:03:44.887693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:03:44.887701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:03:44.887710 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 01:03:44.887719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:03:44.887727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 01:03:44.887736 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 01:03:44.887746 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 01:03:44.887757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 01:03:44.887766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:03:44.887774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:03:44.887782 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:03:44.887791 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:03:44.887799 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:03:44.887807 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:03:44.887815 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:03:44.887826 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:03:44.887856 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:03:44.887864 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Oct 9 01:03:44.887873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:03:44.887881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:03:44.887889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:03:44.887897 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:03:44.887905 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 01:03:44.887916 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:03:44.887925 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 01:03:44.887933 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 01:03:44.887941 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:03:44.887949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:03:44.887958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:44.887966 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 01:03:44.887974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:03:44.887982 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 01:03:44.888011 systemd-journald[192]: Collecting audit messages is disabled. Oct 9 01:03:44.888035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:03:44.888044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:03:44.888052 systemd-journald[192]: Journal started Oct 9 01:03:44.888070 systemd-journald[192]: Runtime Journal (/run/log/journal/9e47ee7bb27241fcbe398daf874fd03b) is 6.0M, max 48.3M, 42.2M free. Oct 9 01:03:44.882232 systemd-modules-load[193]: Inserted module 'overlay' Oct 9 01:03:44.899001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:03:44.899859 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:03:44.901205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:44.904181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:03:44.908643 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:03:44.912812 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 01:03:44.910955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:03:44.914576 kernel: Bridge firewalling registered Oct 9 01:03:44.914517 systemd-modules-load[193]: Inserted module 'br_netfilter' Oct 9 01:03:44.915624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:03:44.918129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:03:44.925483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:03:44.930640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:03:44.933379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:03:44.939966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 9 01:03:44.943570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:03:44.953094 dracut-cmdline[226]: dracut-dracut-053 Oct 9 01:03:44.956093 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:03:44.988097 systemd-resolved[230]: Positive Trust Anchors: Oct 9 01:03:44.988114 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:03:44.988152 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:03:44.999667 systemd-resolved[230]: Defaulting to hostname 'linux'. Oct 9 01:03:45.001656 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:03:45.002146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:03:45.044861 kernel: SCSI subsystem initialized Oct 9 01:03:45.053859 kernel: Loading iSCSI transport class v2.0-870. Oct 9 01:03:45.064866 kernel: iscsi: registered transport (tcp) Oct 9 01:03:45.085220 kernel: iscsi: registered transport (qla4xxx) Oct 9 01:03:45.085247 kernel: QLogic iSCSI HBA Driver Oct 9 01:03:45.129490 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 01:03:45.140950 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 01:03:45.166934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 01:03:45.166970 kernel: device-mapper: uevent: version 1.0.3 Oct 9 01:03:45.167977 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 01:03:45.208870 kernel: raid6: avx2x4 gen() 29274 MB/s Oct 9 01:03:45.225855 kernel: raid6: avx2x2 gen() 29772 MB/s Oct 9 01:03:45.242960 kernel: raid6: avx2x1 gen() 24871 MB/s Oct 9 01:03:45.242980 kernel: raid6: using algorithm avx2x2 gen() 29772 MB/s Oct 9 01:03:45.260971 kernel: raid6: .... xor() 19566 MB/s, rmw enabled Oct 9 01:03:45.260995 kernel: raid6: using avx2x2 recovery algorithm Oct 9 01:03:45.281868 kernel: xor: automatically using best checksumming function avx Oct 9 01:03:45.432866 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 01:03:45.444632 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:03:45.462036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:03:45.473729 systemd-udevd[414]: Using default interface naming scheme 'v255'. Oct 9 01:03:45.478222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:03:45.483971 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 9 01:03:45.504911 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Oct 9 01:03:45.541095 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:03:45.559109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:03:45.622301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:03:45.630963 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 01:03:45.645708 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 01:03:45.648168 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:03:45.650823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:03:45.652104 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:03:45.657863 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 9 01:03:45.658973 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 01:03:45.662979 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 01:03:45.670055 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 01:03:45.670088 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 01:03:45.670100 kernel: GPT:9289727 != 19775487 Oct 9 01:03:45.670117 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 01:03:45.670130 kernel: GPT:9289727 != 19775487 Oct 9 01:03:45.670140 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 01:03:45.670030 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:03:45.678448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:03:45.682861 kernel: libata version 3.00 loaded. Oct 9 01:03:45.692869 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 01:03:45.692896 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 01:03:45.693078 kernel: AES CTR mode by8 optimization enabled Oct 9 01:03:45.693884 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 01:03:45.695974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:03:45.697745 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 01:03:45.697966 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 01:03:45.696263 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:03:45.702012 kernel: scsi host0: ahci Oct 9 01:03:45.702204 kernel: scsi host1: ahci Oct 9 01:03:45.702402 kernel: scsi host2: ahci Oct 9 01:03:45.702544 kernel: scsi host3: ahci Oct 9 01:03:45.702687 kernel: scsi host4: ahci Oct 9 01:03:45.708156 kernel: scsi host5: ahci Oct 9 01:03:45.708399 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 Oct 9 01:03:45.708413 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 Oct 9 01:03:45.708423 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 Oct 9 01:03:45.708433 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 Oct 9 01:03:45.708443 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 Oct 9 01:03:45.709048 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 Oct 9 01:03:45.710322 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:03:45.718433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 9 01:03:45.718659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:45.725683 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460) Oct 9 01:03:45.725709 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (458) Oct 9 01:03:45.723436 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:45.730312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:45.744929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 01:03:45.745563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:45.753318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 01:03:45.765562 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 01:03:45.765920 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 01:03:45.774072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 01:03:45.791001 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 01:03:45.791258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:03:45.791319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:45.793718 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:45.794686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:45.810607 disk-uuid[557]: Primary Header is updated. Oct 9 01:03:45.810607 disk-uuid[557]: Secondary Entries is updated. Oct 9 01:03:45.810607 disk-uuid[557]: Secondary Header is updated. Oct 9 01:03:45.815342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:03:45.811186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:45.817993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:03:45.818979 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:03:45.822861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:03:45.840187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 01:03:46.022054 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 01:03:46.022116 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 01:03:46.022128 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 9 01:03:46.022866 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 9 01:03:46.023854 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 01:03:46.024861 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 01:03:46.025860 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 9 01:03:46.025872 kernel: ata3.00: applying bridge limits Oct 9 01:03:46.026878 kernel: ata3.00: configured for UDMA/100 Oct 9 01:03:46.028851 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 01:03:46.083393 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 9 01:03:46.083632 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 01:03:46.100872 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 9 01:03:46.822869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:03:46.822922 disk-uuid[559]: The operation has completed successfully. Oct 9 01:03:46.853633 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 01:03:46.853753 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 01:03:46.878965 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 01:03:46.882152 sh[599]: Success Oct 9 01:03:46.893861 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 9 01:03:46.924625 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 01:03:46.939223 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 01:03:46.942108 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 01:03:46.952873 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 01:03:46.952901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:03:46.952912 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 01:03:46.955229 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 01:03:46.955243 kernel: BTRFS info (device dm-0): using free space tree Oct 9 01:03:46.959396 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 01:03:46.961706 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 01:03:46.973941 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 01:03:46.976487 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 01:03:46.984180 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:03:46.984214 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:03:46.984227 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:03:46.986872 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:03:46.995467 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 01:03:46.997284 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:03:47.006855 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 9 01:03:47.013067 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 01:03:47.070502 ignition[691]: Ignition 2.19.0 Oct 9 01:03:47.070514 ignition[691]: Stage: fetch-offline Oct 9 01:03:47.070553 ignition[691]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:03:47.070564 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:03:47.070650 ignition[691]: parsed url from cmdline: "" Oct 9 01:03:47.070655 ignition[691]: no config URL provided Oct 9 01:03:47.070660 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:03:47.070669 ignition[691]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:03:47.070697 ignition[691]: op(1): [started] loading QEMU firmware config module Oct 9 01:03:47.070702 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 9 01:03:47.078934 ignition[691]: op(1): [finished] loading QEMU firmware config module Oct 9 01:03:47.089710 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:03:47.100981 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:03:47.121397 ignition[691]: parsing config with SHA512: dc1ca9ce7510e14c60b345c88d7d493256edca416f3331dc6af9d96232babb175bfe1052513e8a31f1d1360522348e4014ef04958b3473dff58ffe76321e7305 Oct 9 01:03:47.121925 systemd-networkd[787]: lo: Link UP Oct 9 01:03:47.121933 systemd-networkd[787]: lo: Gained carrier Oct 9 01:03:47.123632 systemd-networkd[787]: Enumeration completed Oct 9 01:03:47.123720 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:03:47.124019 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:03:47.128689 ignition[691]: fetch-offline: fetch-offline passed Oct 9 01:03:47.124023 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:03:47.128769 ignition[691]: Ignition finished successfully Oct 9 01:03:47.124740 systemd-networkd[787]: eth0: Link UP Oct 9 01:03:47.124743 systemd-networkd[787]: eth0: Gained carrier Oct 9 01:03:47.124749 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:03:47.126648 systemd[1]: Reached target network.target - Network. Oct 9 01:03:47.128208 unknown[691]: fetched base config from "system" Oct 9 01:03:47.128218 unknown[691]: fetched user config from "qemu" Oct 9 01:03:47.131245 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:03:47.133606 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 9 01:03:47.137874 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:03:47.137985 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 01:03:47.155700 ignition[790]: Ignition 2.19.0 Oct 9 01:03:47.155711 ignition[790]: Stage: kargs Oct 9 01:03:47.155927 ignition[790]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:03:47.155941 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:03:47.157149 ignition[790]: kargs: kargs passed Oct 9 01:03:47.157204 ignition[790]: Ignition finished successfully Oct 9 01:03:47.160890 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 9 01:03:47.173973 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 01:03:47.186886 ignition[799]: Ignition 2.19.0 Oct 9 01:03:47.186897 ignition[799]: Stage: disks Oct 9 01:03:47.187066 ignition[799]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:03:47.187078 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:03:47.187949 ignition[799]: disks: disks passed Oct 9 01:03:47.187989 ignition[799]: Ignition finished successfully Oct 9 01:03:47.193966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 01:03:47.194461 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 01:03:47.196192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:03:47.198560 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:03:47.199069 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:03:47.199410 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:03:47.215957 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 01:03:47.230340 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 01:03:47.238218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 01:03:47.243983 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 01:03:47.329849 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none. Oct 9 01:03:47.330055 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 01:03:47.331143 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 01:03:47.339932 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:03:47.341769 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 01:03:47.342489 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 01:03:47.347930 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818) Oct 9 01:03:47.342531 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 01:03:47.352964 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:03:47.352987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:03:47.352998 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:03:47.342553 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:03:47.354935 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:03:47.356975 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:03:47.384266 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 01:03:47.385569 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 9 01:03:47.420906 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 01:03:47.425924 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Oct 9 01:03:47.430335 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 01:03:47.434859 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 01:03:47.518910 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 01:03:47.530013 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 01:03:47.532820 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 01:03:47.541862 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:03:47.559031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 01:03:47.563126 ignition[932]: INFO : Ignition 2.19.0 Oct 9 01:03:47.563126 ignition[932]: INFO : Stage: mount Oct 9 01:03:47.564723 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:03:47.564723 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:03:47.564723 ignition[932]: INFO : mount: mount passed Oct 9 01:03:47.564723 ignition[932]: INFO : Ignition finished successfully Oct 9 01:03:47.570318 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 01:03:47.588927 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 01:03:47.952272 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 01:03:47.965000 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:03:47.971864 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (947) Oct 9 01:03:47.971896 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:03:47.974418 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:03:47.974442 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:03:47.976861 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:03:47.978402 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 01:03:48.005545 ignition[965]: INFO : Ignition 2.19.0 Oct 9 01:03:48.005545 ignition[965]: INFO : Stage: files Oct 9 01:03:48.007410 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:03:48.007410 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:03:48.007410 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Oct 9 01:03:48.010644 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 01:03:48.010644 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 01:03:48.015208 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 01:03:48.016769 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 01:03:48.018568 unknown[965]: wrote ssh authorized keys file for user: core Oct 9 01:03:48.019737 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 01:03:48.021157 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 01:03:48.021157 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 01:03:48.056688 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 01:03:48.151210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 01:03:48.151210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 01:03:48.155190 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 9 01:03:48.599021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 01:03:48.706308 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 01:03:48.706308 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 01:03:48.710164 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 01:03:48.712073 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:03:48.714072 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:03:48.716000 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:03:48.717984 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:03:48.719905 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:03:48.721891 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:03:48.724328 ignition[965]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:03:48.726299 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:03:48.726299 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:03:48.726299 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:03:48.726299 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:03:48.726299 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Oct 9 01:03:48.750979 systemd-networkd[787]: eth0: Gained IPv6LL Oct 9 01:03:49.116590 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 01:03:49.496079 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:03:49.496079 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 9 01:03:49.499732 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 9 01:03:49.501759 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 9 01:03:49.528070 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 01:03:49.532394 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 01:03:49.533928 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 9 01:03:49.533928 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 9 01:03:49.533928 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 01:03:49.533928 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:03:49.533928 ignition[965]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:03:49.533928 ignition[965]: INFO : files: files passed Oct 9 01:03:49.533928 ignition[965]: INFO : Ignition finished successfully Oct 9 01:03:49.544546 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 01:03:49.551964 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 01:03:49.552956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 01:03:49.559621 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 01:03:49.559734 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 01:03:49.564169 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory Oct 9 01:03:49.567606 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:03:49.569249 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:03:49.570759 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:03:49.573467 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:03:49.574877 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 01:03:49.583946 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 01:03:49.605984 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 01:03:49.606105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 01:03:49.608347 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 01:03:49.610400 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 01:03:49.611454 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 01:03:49.627954 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 01:03:49.639443 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:03:49.647982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 01:03:49.656921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:03:49.658172 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:03:49.660409 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 01:03:49.662408 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 01:03:49.662512 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:03:49.664701 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 01:03:49.666406 systemd[1]: Stopped target basic.target - Basic System. Oct 9 01:03:49.668468 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 01:03:49.670529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:03:49.672500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 01:03:49.674625 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 01:03:49.676715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 9 01:03:49.678969 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 01:03:49.681007 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 01:03:49.683194 systemd[1]: Stopped target swap.target - Swaps. Oct 9 01:03:49.684940 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 01:03:49.685043 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:03:49.687182 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:03:49.688752 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:03:49.690799 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 01:03:49.690884 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:03:49.693075 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 01:03:49.693185 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 01:03:49.695454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 01:03:49.695559 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:03:49.697576 systemd[1]: Stopped target paths.target - Path Units. Oct 9 01:03:49.699361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 01:03:49.702891 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:03:49.704966 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 01:03:49.706934 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 01:03:49.708671 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 01:03:49.708759 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:03:49.710684 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 01:03:49.710768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:03:49.713114 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 01:03:49.713232 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:03:49.715158 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 01:03:49.715265 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 01:03:49.736959 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 01:03:49.737878 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 01:03:49.737986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:03:49.740810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 01:03:49.742187 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 01:03:49.742307 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:03:49.751484 ignition[1018]: INFO : Ignition 2.19.0 Oct 9 01:03:49.751484 ignition[1018]: INFO : Stage: umount Oct 9 01:03:49.751484 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:03:49.751484 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:03:49.745104 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 9 01:03:49.759852 ignition[1018]: INFO : umount: umount passed Oct 9 01:03:49.759852 ignition[1018]: INFO : Ignition finished successfully Oct 9 01:03:49.745222 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:03:49.753597 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 01:03:49.753736 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 01:03:49.756457 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 01:03:49.756599 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 01:03:49.760641 systemd[1]: Stopped target network.target - Network. Oct 9 01:03:49.761737 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 01:03:49.761803 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 01:03:49.763858 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 01:03:49.763919 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 01:03:49.766135 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 01:03:49.766206 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 01:03:49.768537 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 01:03:49.768596 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 01:03:49.771273 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 01:03:49.773376 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 01:03:49.776637 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 01:03:49.776863 systemd-networkd[787]: eth0: DHCPv6 lease lost Oct 9 01:03:49.779460 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 01:03:49.779625 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 01:03:49.781377 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 01:03:49.781543 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 01:03:49.785144 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 01:03:49.785217 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:03:49.791973 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 01:03:49.793430 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 01:03:49.793495 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:03:49.794877 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 01:03:49.794937 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:03:49.796899 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 01:03:49.796959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 01:03:49.799106 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 01:03:49.799164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:03:49.801867 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:03:49.812252 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 01:03:49.812375 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 01:03:49.824547 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 9 01:03:49.824766 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:03:49.827209 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 01:03:49.827271 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 01:03:49.829550 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 01:03:49.829600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:03:49.831777 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 01:03:49.831858 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:03:49.834394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 01:03:49.834454 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 01:03:49.836373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:03:49.836432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:03:49.842973 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 01:03:49.844902 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 01:03:49.844955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:03:49.847402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:03:49.847449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:49.849959 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 01:03:49.850058 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 01:03:49.951656 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 01:03:49.951819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 01:03:49.954364 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 01:03:49.955697 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 01:03:49.955762 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 01:03:49.968991 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 01:03:49.976797 systemd[1]: Switching root. Oct 9 01:03:50.009359 systemd-journald[192]: Journal stopped Oct 9 01:03:51.116875 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Oct 9 01:03:51.116951 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 01:03:51.116975 kernel: SELinux: policy capability open_perms=1 Oct 9 01:03:51.116990 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 01:03:51.117005 kernel: SELinux: policy capability always_check_network=0 Oct 9 01:03:51.117020 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 01:03:51.117035 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 01:03:51.117056 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 01:03:51.117075 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 01:03:51.117090 kernel: audit: type=1403 audit(1728435830.399:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 01:03:51.117111 systemd[1]: Successfully loaded SELinux policy in 39.905ms. Oct 9 01:03:51.117129 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.493ms. 
Oct 9 01:03:51.117157 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:03:51.117174 systemd[1]: Detected virtualization kvm. Oct 9 01:03:51.117190 systemd[1]: Detected architecture x86-64. Oct 9 01:03:51.117206 systemd[1]: Detected first boot. Oct 9 01:03:51.117226 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:03:51.117244 zram_generator::config[1062]: No configuration found. Oct 9 01:03:51.117262 systemd[1]: Populated /etc with preset unit settings. Oct 9 01:03:51.117279 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 01:03:51.117294 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 01:03:51.117311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 01:03:51.117328 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 01:03:51.117345 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 01:03:51.117361 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 01:03:51.117381 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 01:03:51.117398 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 01:03:51.117426 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 01:03:51.117445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 01:03:51.117461 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 01:03:51.117478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:03:51.117500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:03:51.117517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 01:03:51.117536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 01:03:51.117553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 01:03:51.117569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:03:51.117585 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 01:03:51.117602 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:03:51.117620 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 01:03:51.117636 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 01:03:51.117652 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 01:03:51.117673 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 01:03:51.117688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:03:51.117703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:03:51.117720 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:03:51.117736 systemd[1]: Reached target swap.target - Swaps. 
Oct 9 01:03:51.117752 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 01:03:51.117768 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 01:03:51.117784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:03:51.117801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:03:51.117821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:03:51.117859 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 01:03:51.117876 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 01:03:51.117892 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 01:03:51.117908 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 01:03:51.117924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:03:51.117940 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 01:03:51.117956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 01:03:51.117971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 01:03:51.117994 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 01:03:51.118010 systemd[1]: Reached target machines.target - Containers. Oct 9 01:03:51.118026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 01:03:51.118043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:03:51.118059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:03:51.118075 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 01:03:51.118091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:03:51.118107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:03:51.118126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:03:51.118152 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 01:03:51.118168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:03:51.118181 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 01:03:51.118193 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 01:03:51.118205 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 01:03:51.118216 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 01:03:51.118228 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 01:03:51.118239 kernel: loop: module loaded Oct 9 01:03:51.118259 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:03:51.118270 kernel: fuse: init (API version 7.39) Oct 9 01:03:51.118281 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:03:51.118293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Oct 9 01:03:51.118306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 01:03:51.118319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:03:51.118331 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 01:03:51.118342 systemd[1]: Stopped verity-setup.service. Oct 9 01:03:51.118356 kernel: ACPI: bus type drm_connector registered Oct 9 01:03:51.118375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:03:51.118413 systemd-journald[1132]: Collecting audit messages is disabled. Oct 9 01:03:51.118442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 01:03:51.118458 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 01:03:51.118474 systemd-journald[1132]: Journal started Oct 9 01:03:51.118505 systemd-journald[1132]: Runtime Journal (/run/log/journal/9e47ee7bb27241fcbe398daf874fd03b) is 6.0M, max 48.3M, 42.2M free. Oct 9 01:03:50.891278 systemd[1]: Queued start job for default target multi-user.target. Oct 9 01:03:50.912365 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 01:03:50.912786 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 01:03:51.122526 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:03:51.123295 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 01:03:51.124447 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 01:03:51.125749 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 01:03:51.127034 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 01:03:51.128303 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 01:03:51.129812 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:03:51.131452 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 01:03:51.131622 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 01:03:51.133263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:03:51.133428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:03:51.134895 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:03:51.135063 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:03:51.136523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:03:51.136683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:03:51.138239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 01:03:51.138399 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 01:03:51.139908 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:03:51.140074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:03:51.141459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:03:51.143021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 01:03:51.144556 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 01:03:51.158770 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Oct 9 01:03:51.170937 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 01:03:51.173470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 01:03:51.174732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 01:03:51.174767 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:03:51.177190 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 01:03:51.179713 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 01:03:51.182103 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 01:03:51.183345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:03:51.187095 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 01:03:51.191007 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 01:03:51.192471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:03:51.194902 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 01:03:51.196339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:03:51.198025 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:03:51.203477 systemd-journald[1132]: Time spent on flushing to /var/log/journal/9e47ee7bb27241fcbe398daf874fd03b is 16.376ms for 1026 entries. Oct 9 01:03:51.203477 systemd-journald[1132]: System Journal (/var/log/journal/9e47ee7bb27241fcbe398daf874fd03b) is 8.0M, max 195.6M, 187.6M free. Oct 9 01:03:51.226109 systemd-journald[1132]: Received client request to flush runtime journal. Oct 9 01:03:51.213138 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 01:03:51.217288 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 01:03:51.222988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:03:51.228385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 01:03:51.229925 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 01:03:51.232182 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 01:03:51.234264 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 01:03:51.236200 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 01:03:51.239854 kernel: loop0: detected capacity change from 0 to 140992 Oct 9 01:03:51.244488 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 01:03:51.255368 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 01:03:51.262045 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 01:03:51.263967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 9 01:03:51.269848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 01:03:51.267396 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 01:03:51.274744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:03:51.282274 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 01:03:51.284782 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 01:03:51.285584 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 01:03:51.297311 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Oct 9 01:03:51.297327 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Oct 9 01:03:51.302923 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:03:51.305850 kernel: loop1: detected capacity change from 0 to 210664 Oct 9 01:03:51.339947 kernel: loop2: detected capacity change from 0 to 138192 Oct 9 01:03:51.387866 kernel: loop3: detected capacity change from 0 to 140992 Oct 9 01:03:51.399867 kernel: loop4: detected capacity change from 0 to 210664 Oct 9 01:03:51.407864 kernel: loop5: detected capacity change from 0 to 138192 Oct 9 01:03:51.418112 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 9 01:03:51.419773 (sd-merge)[1200]: Merged extensions into '/usr'. Oct 9 01:03:51.423849 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 01:03:51.423864 systemd[1]: Reloading... Oct 9 01:03:51.473866 zram_generator::config[1226]: No configuration found. Oct 9 01:03:51.511503 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 01:03:51.599369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:03:51.653825 systemd[1]: Reloading finished in 229 ms. Oct 9 01:03:51.688264 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 01:03:51.689856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 01:03:51.704090 systemd[1]: Starting ensure-sysext.service... Oct 9 01:03:51.706311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:03:51.713177 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Oct 9 01:03:51.713191 systemd[1]: Reloading... Oct 9 01:03:51.732630 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 01:03:51.733162 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 01:03:51.734450 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 01:03:51.735466 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Oct 9 01:03:51.735617 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Oct 9 01:03:51.739947 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. 
Oct 9 01:03:51.740053 systemd-tmpfiles[1264]: Skipping /boot Oct 9 01:03:51.755068 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:03:51.755085 systemd-tmpfiles[1264]: Skipping /boot Oct 9 01:03:51.770866 zram_generator::config[1294]: No configuration found. Oct 9 01:03:51.878109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:03:51.927250 systemd[1]: Reloading finished in 213 ms. Oct 9 01:03:51.945613 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 01:03:51.962317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:03:51.968868 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:03:51.971253 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 01:03:51.973717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 01:03:51.978160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:03:51.982988 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:03:51.987014 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 01:03:51.993558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:03:51.994149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:03:52.004233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:03:52.010630 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Oct 9 01:03:52.011068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:03:52.014993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:03:52.016555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:03:52.019065 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 01:03:52.020459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:03:52.022610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 01:03:52.025142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:03:52.025347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:03:52.027869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:03:52.028084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:03:52.030622 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:03:52.030927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:03:52.038158 augenrules[1359]: No rules Oct 9 01:03:52.039517 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:03:52.039777 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:03:52.043405 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Oct 9 01:03:52.045611 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:03:52.060373 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:03:52.068076 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:03:52.069469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:03:52.072996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:03:52.075727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:03:52.080617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:03:52.086037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:03:52.087488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:03:52.091980 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:03:52.096018 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 01:03:52.098928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:03:52.099505 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 01:03:52.102728 systemd[1]: Finished ensure-sysext.service. Oct 9 01:03:52.104500 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 01:03:52.107208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:03:52.108226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:03:52.109973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383) Oct 9 01:03:52.111477 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:03:52.111692 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:03:52.114383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:03:52.114614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:03:52.118050 augenrules[1388]: /sbin/augenrules: No change Oct 9 01:03:52.122854 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1384) Oct 9 01:03:52.122934 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1384) Oct 9 01:03:52.129424 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 01:03:52.136190 augenrules[1423]: No rules Oct 9 01:03:52.143290 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:03:52.143596 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:03:52.146377 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:03:52.146604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:03:52.154723 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 9 01:03:52.159573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 9 01:03:52.159643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:03:52.176042 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 01:03:52.177391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 01:03:52.184462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 01:03:52.199044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 01:03:52.205869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 01:03:52.212132 kernel: ACPI: button: Power Button [PWRF] Oct 9 01:03:52.226558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 01:03:52.226673 systemd-resolved[1334]: Positive Trust Anchors: Oct 9 01:03:52.226690 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:03:52.226730 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:03:52.231277 systemd-resolved[1334]: Defaulting to hostname 'linux'. Oct 9 01:03:52.234038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:03:52.235396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:03:52.235528 systemd-networkd[1399]: lo: Link UP Oct 9 01:03:52.235534 systemd-networkd[1399]: lo: Gained carrier Oct 9 01:03:52.237756 systemd-networkd[1399]: Enumeration completed Oct 9 01:03:52.237902 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:03:52.238280 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:03:52.238292 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:03:52.239855 systemd[1]: Reached target network.target - Network. Oct 9 01:03:52.240368 systemd-networkd[1399]: eth0: Link UP Oct 9 01:03:52.240377 systemd-networkd[1399]: eth0: Gained carrier Oct 9 01:03:52.240389 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:03:52.248080 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Oct 9 01:03:52.252932 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:03:52.257856 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 01:03:52.264706 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 9 01:03:52.265069 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 9 01:03:52.265274 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 9 01:03:52.265492 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 9 01:03:52.276757 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 01:03:52.278962 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 01:03:53.936022 systemd-resolved[1334]: Clock change detected. Flushing caches. Oct 9 01:03:53.936303 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 01:03:53.936360 systemd-timesyncd[1434]: Initial clock synchronization to Wed 2024-10-09 01:03:53.935969 UTC. Oct 9 01:03:53.945374 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 01:03:53.951662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:53.956990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:03:53.957190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:53.976497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:03:54.021428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:03:54.045493 kernel: kvm_amd: TSC scaling supported Oct 9 01:03:54.045532 kernel: kvm_amd: Nested Virtualization enabled Oct 9 01:03:54.045570 kernel: kvm_amd: Nested Paging enabled Oct 9 01:03:54.045583 kernel: kvm_amd: LBR virtualization supported Oct 9 01:03:54.046557 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 9 01:03:54.046582 kernel: kvm_amd: Virtual GIF supported Oct 9 01:03:54.063353 kernel: EDAC MC: Ver: 3.0.0 Oct 9 01:03:54.092984 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 01:03:54.107488 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 01:03:54.115430 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:03:54.146590 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 01:03:54.148100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:03:54.149231 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:03:54.150407 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 01:03:54.151681 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 01:03:54.153120 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 01:03:54.154355 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 01:03:54.155769 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 01:03:54.157012 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 01:03:54.157040 systemd[1]: Reached target paths.target - Path Units. 
Oct 9 01:03:54.157960 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:03:54.159627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 01:03:54.162159 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 01:03:54.174661 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 01:03:54.176926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 01:03:54.178656 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 01:03:54.180263 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:03:54.181294 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:03:54.182298 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:03:54.182326 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:03:54.183627 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 01:03:54.185737 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 01:03:54.188416 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:03:54.189666 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 01:03:54.193386 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 01:03:54.195123 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 01:03:54.196573 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 01:03:54.200296 jq[1465]: false Oct 9 01:03:54.208466 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 01:03:54.210503 extend-filesystems[1466]: Found loop3 Oct 9 01:03:54.210503 extend-filesystems[1466]: Found loop4 Oct 9 01:03:54.210503 extend-filesystems[1466]: Found loop5 Oct 9 01:03:54.210503 extend-filesystems[1466]: Found sr0 Oct 9 01:03:54.210503 extend-filesystems[1466]: Found vda Oct 9 01:03:54.210503 extend-filesystems[1466]: Found vda1 Oct 9 01:03:54.216212 extend-filesystems[1466]: Found vda2 Oct 9 01:03:54.216212 extend-filesystems[1466]: Found vda3 Oct 9 01:03:54.216212 extend-filesystems[1466]: Found usr Oct 9 01:03:54.216212 extend-filesystems[1466]: Found vda4 Oct 9 01:03:54.216212 extend-filesystems[1466]: Found vda6 Oct 9 01:03:54.216212 extend-filesystems[1466]: Found vda7 Oct 9 01:03:54.216212 extend-filesystems[1466]: Found vda9 Oct 9 01:03:54.216212 extend-filesystems[1466]: Checking size of /dev/vda9 Oct 9 01:03:54.212095 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 01:03:54.211452 dbus-daemon[1464]: [system] SELinux support is enabled Oct 9 01:03:54.229450 extend-filesystems[1466]: Resized partition /dev/vda9 Oct 9 01:03:54.219908 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 01:03:54.231760 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Oct 9 01:03:54.228613 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 01:03:54.232003 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 01:03:54.232558 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Oct 9 01:03:54.234174 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 01:03:54.237376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383) Oct 9 01:03:54.237633 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 01:03:54.244509 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 01:03:54.247020 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 01:03:54.255551 jq[1486]: true Oct 9 01:03:54.253208 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 01:03:54.266773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 01:03:54.267115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 01:03:54.267571 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 01:03:54.268027 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 01:03:54.272065 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 01:03:54.272105 update_engine[1484]: I20241009 01:03:54.271271 1484 main.cc:92] Flatcar Update Engine starting Oct 9 01:03:54.271214 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 01:03:54.297764 update_engine[1484]: I20241009 01:03:54.275483 1484 update_check_scheduler.cc:74] Next update check in 8m51s Oct 9 01:03:54.272788 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 01:03:54.300454 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 01:03:54.300454 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 01:03:54.300454 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 01:03:54.285859 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 01:03:54.311097 extend-filesystems[1466]: Resized filesystem in /dev/vda9 Oct 9 01:03:54.314275 jq[1491]: true Oct 9 01:03:54.299764 systemd-logind[1481]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 01:03:54.299789 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 01:03:54.302083 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 01:03:54.302323 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 01:03:54.303580 systemd-logind[1481]: New seat seat0. Oct 9 01:03:54.310822 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 01:03:54.320050 systemd[1]: Started update-engine.service - Update Engine. Oct 9 01:03:54.321365 tar[1490]: linux-amd64/helm Oct 9 01:03:54.322735 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 01:03:54.322878 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 01:03:54.326521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 9 01:03:54.326647 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 01:03:54.341479 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 01:03:54.366776 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Oct 9 01:03:54.367831 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 01:03:54.370773 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 01:03:54.380479 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 01:03:54.418481 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 01:03:54.442132 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 01:03:54.448736 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 01:03:54.457044 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 01:03:54.457273 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 01:03:54.467579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 01:03:54.478494 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 01:03:54.482607 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 01:03:54.485686 containerd[1492]: time="2024-10-09T01:03:54.485607998Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 01:03:54.485732 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 01:03:54.487507 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 01:03:54.511793 containerd[1492]: time="2024-10-09T01:03:54.511748922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.513417 containerd[1492]: time="2024-10-09T01:03:54.513379158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:03:54.513417 containerd[1492]: time="2024-10-09T01:03:54.513405859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 01:03:54.513466 containerd[1492]: time="2024-10-09T01:03:54.513421358Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 01:03:54.513818 containerd[1492]: time="2024-10-09T01:03:54.513787985Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 01:03:54.513818 containerd[1492]: time="2024-10-09T01:03:54.513810277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.513900 containerd[1492]: time="2024-10-09T01:03:54.513874267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:03:54.513900 containerd[1492]: time="2024-10-09T01:03:54.513890387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514101 containerd[1492]: time="2024-10-09T01:03:54.514073150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514101 containerd[1492]: time="2024-10-09T01:03:54.514091755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514146 containerd[1492]: time="2024-10-09T01:03:54.514104699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514146 containerd[1492]: time="2024-10-09T01:03:54.514114157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514231 containerd[1492]: time="2024-10-09T01:03:54.514204276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514477 containerd[1492]: time="2024-10-09T01:03:54.514449135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514604 containerd[1492]: time="2024-10-09T01:03:54.514572195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:03:54.514604 containerd[1492]: time="2024-10-09T01:03:54.514597242Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 01:03:54.514712 containerd[1492]: time="2024-10-09T01:03:54.514691980Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 01:03:54.514762 containerd[1492]: time="2024-10-09T01:03:54.514747634Z" level=info msg="metadata content store policy set" policy=shared Oct 9 01:03:54.521996 containerd[1492]: time="2024-10-09T01:03:54.521959667Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 01:03:54.522031 containerd[1492]: time="2024-10-09T01:03:54.521999041Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 01:03:54.522031 containerd[1492]: time="2024-10-09T01:03:54.522020230Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 01:03:54.522068 containerd[1492]: time="2024-10-09T01:03:54.522034688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 01:03:54.522068 containerd[1492]: time="2024-10-09T01:03:54.522046850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 01:03:54.522190 containerd[1492]: time="2024-10-09T01:03:54.522163730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 01:03:54.522397 containerd[1492]: time="2024-10-09T01:03:54.522371138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 01:03:54.522494 containerd[1492]: time="2024-10-09T01:03:54.522471988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Oct 9 01:03:54.522494 containerd[1492]: time="2024-10-09T01:03:54.522490622Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 01:03:54.522533 containerd[1492]: time="2024-10-09T01:03:54.522504268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 01:03:54.522533 containerd[1492]: time="2024-10-09T01:03:54.522517413Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522533 containerd[1492]: time="2024-10-09T01:03:54.522529786Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522592 containerd[1492]: time="2024-10-09T01:03:54.522540867Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522592 containerd[1492]: time="2024-10-09T01:03:54.522553130Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522592 containerd[1492]: time="2024-10-09T01:03:54.522565813Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522592 containerd[1492]: time="2024-10-09T01:03:54.522578217Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522666 containerd[1492]: time="2024-10-09T01:03:54.522601120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522666 containerd[1492]: time="2024-10-09T01:03:54.522611870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 01:03:54.522666 containerd[1492]: time="2024-10-09T01:03:54.522630144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522666 containerd[1492]: time="2024-10-09T01:03:54.522642938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522666 containerd[1492]: time="2024-10-09T01:03:54.522654369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522666 containerd[1492]: time="2024-10-09T01:03:54.522666412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522678645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522698562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522713751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522736524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522754026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522770277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522782710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522796827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522811 containerd[1492]: time="2024-10-09T01:03:54.522810122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522828005Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522848854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522862630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522874793Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522913896Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522927472Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522937080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522948441Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522957237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522968268Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522978658Z" level=info msg="NRI interface is disabled by configuration." Oct 9 01:03:54.522982 containerd[1492]: time="2024-10-09T01:03:54.522988175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 01:03:54.523275 containerd[1492]: time="2024-10-09T01:03:54.523227815Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 01:03:54.523275 containerd[1492]: time="2024-10-09T01:03:54.523273090Z" level=info msg="Connect containerd service" Oct 9 01:03:54.523433 containerd[1492]: time="2024-10-09T01:03:54.523301813Z" level=info msg="using legacy CRI server" Oct 9 01:03:54.523433 containerd[1492]: time="2024-10-09T01:03:54.523308316Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 01:03:54.523433 containerd[1492]: time="2024-10-09T01:03:54.523401240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 01:03:54.523956 containerd[1492]: time="2024-10-09T01:03:54.523922678Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:03:54.524217 
containerd[1492]: time="2024-10-09T01:03:54.524175341Z" level=info msg="Start subscribing containerd event" Oct 9 01:03:54.524239 containerd[1492]: time="2024-10-09T01:03:54.524215436Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 01:03:54.524239 containerd[1492]: time="2024-10-09T01:03:54.524220416Z" level=info msg="Start recovering state" Oct 9 01:03:54.524275 containerd[1492]: time="2024-10-09T01:03:54.524261393Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 01:03:54.524303 containerd[1492]: time="2024-10-09T01:03:54.524288884Z" level=info msg="Start event monitor" Oct 9 01:03:54.524323 containerd[1492]: time="2024-10-09T01:03:54.524303922Z" level=info msg="Start snapshots syncer" Oct 9 01:03:54.524323 containerd[1492]: time="2024-10-09T01:03:54.524312589Z" level=info msg="Start cni network conf syncer for default" Oct 9 01:03:54.524323 containerd[1492]: time="2024-10-09T01:03:54.524319572Z" level=info msg="Start streaming server" Oct 9 01:03:54.524463 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 01:03:54.525675 containerd[1492]: time="2024-10-09T01:03:54.525641861Z" level=info msg="containerd successfully booted in 0.040983s" Oct 9 01:03:54.681309 tar[1490]: linux-amd64/LICENSE Oct 9 01:03:54.681401 tar[1490]: linux-amd64/README.md Oct 9 01:03:54.699671 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 01:03:55.335536 systemd-networkd[1399]: eth0: Gained IPv6LL Oct 9 01:03:55.338806 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 01:03:55.340895 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 01:03:55.352555 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 01:03:55.354948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:03:55.357108 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 01:03:55.377184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 01:03:55.378768 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 01:03:55.378969 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 01:03:55.381037 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 01:03:55.966859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:03:55.968560 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 01:03:55.970709 systemd[1]: Startup finished in 703ms (kernel) + 5.699s (initrd) + 3.952s (userspace) = 10.355s. Oct 9 01:03:55.971866 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:56.400827 kubelet[1577]: E1009 01:03:56.400673 1577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:56.404961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:56.405157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:04:00.870939 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Oct 9 01:04:00.872206 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:44408.service - OpenSSH per-connection server daemon (10.0.0.1:44408). Oct 9 01:04:00.913549 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 44408 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:00.915584 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:00.923591 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 01:04:00.938555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 01:04:00.940216 systemd-logind[1481]: New session 1 of user core. Oct 9 01:04:00.950320 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 01:04:00.961592 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 01:04:00.964900 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 01:04:01.108927 systemd[1595]: Queued start job for default target default.target. Oct 9 01:04:01.126843 systemd[1595]: Created slice app.slice - User Application Slice. Oct 9 01:04:01.126875 systemd[1595]: Reached target paths.target - Paths. Oct 9 01:04:01.126893 systemd[1595]: Reached target timers.target - Timers. Oct 9 01:04:01.128701 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 01:04:01.141735 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 01:04:01.141852 systemd[1595]: Reached target sockets.target - Sockets. Oct 9 01:04:01.141871 systemd[1595]: Reached target basic.target - Basic System. Oct 9 01:04:01.141906 systemd[1595]: Reached target default.target - Main User Target. Oct 9 01:04:01.141938 systemd[1595]: Startup finished in 169ms. Oct 9 01:04:01.142580 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 01:04:01.144310 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 01:04:01.204689 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:44424.service - OpenSSH per-connection server daemon (10.0.0.1:44424). Oct 9 01:04:01.240466 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 44424 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:01.242057 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:01.246087 systemd-logind[1481]: New session 2 of user core. Oct 9 01:04:01.257469 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 01:04:01.312586 sshd[1606]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:01.320168 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:44424.service: Deactivated successfully. Oct 9 01:04:01.322080 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 01:04:01.323902 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Oct 9 01:04:01.333791 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:44432.service - OpenSSH per-connection server daemon (10.0.0.1:44432). Oct 9 01:04:01.334772 systemd-logind[1481]: Removed session 2. Oct 9 01:04:01.363650 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 44432 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:01.365242 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:01.369253 systemd-logind[1481]: New session 3 of user core. Oct 9 01:04:01.378483 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 9 01:04:01.428411 sshd[1613]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:01.441296 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:44432.service: Deactivated successfully. Oct 9 01:04:01.443187 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 01:04:01.444901 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Oct 9 01:04:01.446165 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:44434.service - OpenSSH per-connection server daemon (10.0.0.1:44434). Oct 9 01:04:01.446900 systemd-logind[1481]: Removed session 3. Oct 9 01:04:01.478962 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 44434 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:01.480566 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:01.484323 systemd-logind[1481]: New session 4 of user core. Oct 9 01:04:01.493466 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 01:04:01.548190 sshd[1621]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:01.560634 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:44434.service: Deactivated successfully. Oct 9 01:04:01.562738 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 01:04:01.564243 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. Oct 9 01:04:01.565656 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:44450.service - OpenSSH per-connection server daemon (10.0.0.1:44450). Oct 9 01:04:01.566425 systemd-logind[1481]: Removed session 4. Oct 9 01:04:01.600785 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 44450 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:01.602511 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:01.606159 systemd-logind[1481]: New session 5 of user core. Oct 9 01:04:01.615489 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:04:01.672873 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:04:01.673227 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:04:01.691362 sudo[1631]: pam_unix(sudo:session): session closed for user root Oct 9 01:04:01.693146 sshd[1628]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:01.711322 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:44450.service: Deactivated successfully. Oct 9 01:04:01.713113 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:04:01.714548 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:04:01.715887 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:44464.service - OpenSSH per-connection server daemon (10.0.0.1:44464). Oct 9 01:04:01.716727 systemd-logind[1481]: Removed session 5. Oct 9 01:04:01.750067 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 44464 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:01.751733 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:01.755869 systemd-logind[1481]: New session 6 of user core. Oct 9 01:04:01.765471 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 9 01:04:01.820009 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:04:01.820420 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:04:01.824562 sudo[1640]: pam_unix(sudo:session): session closed for user root Oct 9 01:04:01.830461 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:04:01.830787 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:04:01.857736 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:04:01.889555 augenrules[1662]: No rules Oct 9 01:04:01.891313 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:04:01.891582 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:04:01.892931 sudo[1639]: pam_unix(sudo:session): session closed for user root Oct 9 01:04:01.894971 sshd[1636]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:01.906522 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:44464.service: Deactivated successfully. Oct 9 01:04:01.908616 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:04:01.910582 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:04:01.918606 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:44480.service - OpenSSH per-connection server daemon (10.0.0.1:44480). Oct 9 01:04:01.919549 systemd-logind[1481]: Removed session 6. Oct 9 01:04:01.948751 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 44480 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:01.950232 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:01.954140 systemd-logind[1481]: New session 7 of user core. Oct 9 01:04:01.962456 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 01:04:02.014655 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:04:02.015005 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:04:02.284916 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:04:02.285229 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:04:02.552254 dockerd[1693]: time="2024-10-09T01:04:02.552111680Z" level=info msg="Starting up" Oct 9 01:04:02.651086 dockerd[1693]: time="2024-10-09T01:04:02.651032146Z" level=info msg="Loading containers: start." Oct 9 01:04:02.824373 kernel: Initializing XFRM netlink socket Oct 9 01:04:02.907099 systemd-networkd[1399]: docker0: Link UP Oct 9 01:04:02.942774 dockerd[1693]: time="2024-10-09T01:04:02.942733025Z" level=info msg="Loading containers: done." Oct 9 01:04:02.956995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3650002663-merged.mount: Deactivated successfully. 
Oct 9 01:04:02.960213 dockerd[1693]: time="2024-10-09T01:04:02.960169178Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:04:02.960300 dockerd[1693]: time="2024-10-09T01:04:02.960272882Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:04:02.960440 dockerd[1693]: time="2024-10-09T01:04:02.960415900Z" level=info msg="Daemon has completed initialization" Oct 9 01:04:02.997048 dockerd[1693]: time="2024-10-09T01:04:02.996979046Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:04:02.997177 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 01:04:03.683923 containerd[1492]: time="2024-10-09T01:04:03.683860909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 9 01:04:04.682803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860949071.mount: Deactivated successfully. Oct 9 01:04:05.962522 containerd[1492]: time="2024-10-09T01:04:05.962469329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:05.981706 containerd[1492]: time="2024-10-09T01:04:05.981636187Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097" Oct 9 01:04:05.997756 containerd[1492]: time="2024-10-09T01:04:05.997729783Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:06.028892 containerd[1492]: time="2024-10-09T01:04:06.028818915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:06.030284 containerd[1492]: time="2024-10-09T01:04:06.030211466Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 2.346298128s" Oct 9 01:04:06.030284 containerd[1492]: time="2024-10-09T01:04:06.030272040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 9 01:04:06.052136 containerd[1492]: time="2024-10-09T01:04:06.052094607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 9 01:04:06.655448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:04:06.665509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:06.840176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 01:04:06.844884 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:04:06.932944 kubelet[1965]: E1009 01:04:06.932760 1965 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:04:06.940165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:04:06.940399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:04:08.504269 containerd[1492]: time="2024-10-09T01:04:08.504199072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:08.505142 containerd[1492]: time="2024-10-09T01:04:08.505094491Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652" Oct 9 01:04:08.506438 containerd[1492]: time="2024-10-09T01:04:08.506402243Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:08.509295 containerd[1492]: time="2024-10-09T01:04:08.509264219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:08.510468 containerd[1492]: time="2024-10-09T01:04:08.510433842Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 2.458306194s" Oct 9 01:04:08.510528 containerd[1492]: time="2024-10-09T01:04:08.510475390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 9 01:04:08.536152 containerd[1492]: time="2024-10-09T01:04:08.536103292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 9 01:04:10.388242 containerd[1492]: time="2024-10-09T01:04:10.388191387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:10.389147 containerd[1492]: time="2024-10-09T01:04:10.389095082Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987" Oct 9 01:04:10.390624 containerd[1492]: time="2024-10-09T01:04:10.390565358Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:10.393508 containerd[1492]: time="2024-10-09T01:04:10.393488369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 
01:04:10.396348 containerd[1492]: time="2024-10-09T01:04:10.395699936Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 1.859559394s" Oct 9 01:04:10.396348 containerd[1492]: time="2024-10-09T01:04:10.395733709Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 9 01:04:10.417802 containerd[1492]: time="2024-10-09T01:04:10.417735242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 9 01:04:11.595666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814674207.mount: Deactivated successfully. Oct 9 01:04:11.861602 containerd[1492]: time="2024-10-09T01:04:11.861454014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:11.862292 containerd[1492]: time="2024-10-09T01:04:11.862258793Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362" Oct 9 01:04:11.863568 containerd[1492]: time="2024-10-09T01:04:11.863536920Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:11.865745 containerd[1492]: time="2024-10-09T01:04:11.865692722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:11.866203 containerd[1492]: time="2024-10-09T01:04:11.866168143Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 1.448388117s" Oct 9 01:04:11.866203 containerd[1492]: time="2024-10-09T01:04:11.866195224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 9 01:04:11.891724 containerd[1492]: time="2024-10-09T01:04:11.891676291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:04:14.391290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955050571.mount: Deactivated successfully. 
Oct 9 01:04:15.746734 containerd[1492]: time="2024-10-09T01:04:15.746661457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:15.747799 containerd[1492]: time="2024-10-09T01:04:15.747748856Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 01:04:15.749631 containerd[1492]: time="2024-10-09T01:04:15.749599055Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:15.753080 containerd[1492]: time="2024-10-09T01:04:15.753051398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:15.754101 containerd[1492]: time="2024-10-09T01:04:15.754071000Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.862353872s" Oct 9 01:04:15.754155 containerd[1492]: time="2024-10-09T01:04:15.754102008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 01:04:15.777717 containerd[1492]: time="2024-10-09T01:04:15.777666131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 01:04:16.294307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709002387.mount: Deactivated successfully. 
Oct 9 01:04:16.301318 containerd[1492]: time="2024-10-09T01:04:16.301279105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:16.302042 containerd[1492]: time="2024-10-09T01:04:16.301997242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 01:04:16.303159 containerd[1492]: time="2024-10-09T01:04:16.303125918Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:16.305284 containerd[1492]: time="2024-10-09T01:04:16.305250311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:16.306095 containerd[1492]: time="2024-10-09T01:04:16.306065600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 528.356488ms" Oct 9 01:04:16.306127 containerd[1492]: time="2024-10-09T01:04:16.306097970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 01:04:16.327673 containerd[1492]: time="2024-10-09T01:04:16.327629031Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 9 01:04:16.904242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269161287.mount: Deactivated successfully. Oct 9 01:04:17.147477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 01:04:17.156613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:17.303294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:17.312969 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:04:17.418223 kubelet[2092]: E1009 01:04:17.417974 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:04:17.422424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:04:17.422662 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 01:04:19.002982 containerd[1492]: time="2024-10-09T01:04:19.002907828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:19.003782 containerd[1492]: time="2024-10-09T01:04:19.003712747Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 9 01:04:19.005071 containerd[1492]: time="2024-10-09T01:04:19.005016582Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:19.007987 containerd[1492]: time="2024-10-09T01:04:19.007956124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:19.009249 containerd[1492]: time="2024-10-09T01:04:19.009219292Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.681552871s" Oct 9 01:04:19.009306 containerd[1492]: time="2024-10-09T01:04:19.009248287Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 9 01:04:21.594441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:21.605578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:21.623037 systemd[1]: Reloading requested from client PID 2217 ('systemctl') (unit session-7.scope)... Oct 9 01:04:21.623053 systemd[1]: Reloading... Oct 9 01:04:21.699382 zram_generator::config[2257]: No configuration found. Oct 9 01:04:21.917978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:04:21.994011 systemd[1]: Reloading finished in 370 ms. Oct 9 01:04:22.036535 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 01:04:22.036631 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 01:04:22.036903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:22.038451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:22.189569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:22.194806 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:04:22.231771 kubelet[2304]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:04:22.231771 kubelet[2304]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 9 01:04:22.231771 kubelet[2304]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:04:22.232761 kubelet[2304]: I1009 01:04:22.232716 2304 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:04:22.820581 kubelet[2304]: I1009 01:04:22.820536 2304 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:04:22.820581 kubelet[2304]: I1009 01:04:22.820575 2304 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:04:22.821670 kubelet[2304]: I1009 01:04:22.820973 2304 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:04:22.836641 kubelet[2304]: I1009 01:04:22.836602 2304 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:04:22.837142 kubelet[2304]: E1009 01:04:22.837118 2304 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.847166 kubelet[2304]: I1009 01:04:22.847147 2304 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:04:22.848227 kubelet[2304]: I1009 01:04:22.848183 2304 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:04:22.848389 kubelet[2304]: I1009 01:04:22.848220 2304 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:04:22.848795 kubelet[2304]: I1009 01:04:22.848772 2304 topology_manager.go:138] "Creating topology manager with none 
policy" Oct 9 01:04:22.848795 kubelet[2304]: I1009 01:04:22.848788 2304 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:04:22.848924 kubelet[2304]: I1009 01:04:22.848905 2304 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:22.849535 kubelet[2304]: I1009 01:04:22.849515 2304 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:04:22.849535 kubelet[2304]: I1009 01:04:22.849533 2304 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:04:22.849585 kubelet[2304]: I1009 01:04:22.849552 2304 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:04:22.849585 kubelet[2304]: I1009 01:04:22.849570 2304 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:04:22.852089 kubelet[2304]: W1009 01:04:22.852018 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.852089 kubelet[2304]: E1009 01:04:22.852061 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.852412 kubelet[2304]: W1009 01:04:22.852379 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.852459 kubelet[2304]: E1009 01:04:22.852415 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.854495 kubelet[2304]: I1009 01:04:22.854477 2304 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:04:22.855658 kubelet[2304]: I1009 01:04:22.855641 2304 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:04:22.855710 kubelet[2304]: W1009 01:04:22.855684 2304 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 9 01:04:22.856358 kubelet[2304]: I1009 01:04:22.856343 2304 server.go:1264] "Started kubelet" Oct 9 01:04:22.856493 kubelet[2304]: I1009 01:04:22.856467 2304 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:04:22.857109 kubelet[2304]: I1009 01:04:22.856566 2304 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:04:22.857109 kubelet[2304]: I1009 01:04:22.856909 2304 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:04:22.858570 kubelet[2304]: I1009 01:04:22.858553 2304 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:04:22.860789 kubelet[2304]: I1009 01:04:22.860079 2304 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:04:22.863266 kubelet[2304]: I1009 01:04:22.863207 2304 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:04:22.863973 kubelet[2304]: E1009 01:04:22.863112 2304 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca3427698a357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:04:22.856311639 +0000 UTC m=+0.657806818,LastTimestamp:2024-10-09 01:04:22.856311639 +0000 UTC m=+0.657806818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:04:22.863973 kubelet[2304]: I1009 01:04:22.863601 2304 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:04:22.863973 kubelet[2304]: E1009 01:04:22.863636 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms" Oct 9 01:04:22.863973 kubelet[2304]: I1009 01:04:22.863684 2304 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:04:22.864116 kubelet[2304]: W1009 01:04:22.864070 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.864116 kubelet[2304]: E1009 01:04:22.864113 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.864187 kubelet[2304]: I1009 01:04:22.864165 2304 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:04:22.864280 kubelet[2304]: I1009 01:04:22.864259 2304 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:04:22.864724 kubelet[2304]: E1009 01:04:22.864702 2304 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:04:22.865406 kubelet[2304]: I1009 01:04:22.865388 2304 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:04:22.876730 kubelet[2304]: I1009 01:04:22.876693 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:04:22.877954 kubelet[2304]: I1009 01:04:22.877933 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:04:22.878018 kubelet[2304]: I1009 01:04:22.877960 2304 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:04:22.878018 kubelet[2304]: I1009 01:04:22.877979 2304 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:04:22.878058 kubelet[2304]: E1009 01:04:22.878035 2304 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:04:22.882302 kubelet[2304]: W1009 01:04:22.882214 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.882302 kubelet[2304]: E1009 01:04:22.882255 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:22.883008 kubelet[2304]: I1009 01:04:22.882987 2304 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:04:22.883008 kubelet[2304]: I1009 01:04:22.883001 2304 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:04:22.883073 kubelet[2304]: I1009 01:04:22.883017 2304 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:22.964934 kubelet[2304]: I1009 01:04:22.964918 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:04:22.965174 kubelet[2304]: E1009 01:04:22.965145 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Oct 9 01:04:22.978288 kubelet[2304]: E1009 01:04:22.978257 2304 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:04:23.064793 kubelet[2304]: E1009 01:04:23.064741 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms" Oct 9 01:04:23.166964 kubelet[2304]: I1009 01:04:23.166941 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:04:23.167179 kubelet[2304]: E1009 01:04:23.167160 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Oct 9 01:04:23.179340 kubelet[2304]: E1009 01:04:23.179304 2304 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:04:23.465812 kubelet[2304]: E1009 01:04:23.465710 2304 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms" Oct 9 01:04:23.533544 kubelet[2304]: I1009 01:04:23.533514 2304 policy_none.go:49] "None policy: Start" Oct 9 01:04:23.534130 kubelet[2304]: I1009 01:04:23.534104 2304 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:04:23.534130 kubelet[2304]: I1009 01:04:23.534124 2304 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:04:23.540698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:04:23.557006 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 01:04:23.559888 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 01:04:23.568701 kubelet[2304]: I1009 01:04:23.568659 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:04:23.568959 kubelet[2304]: E1009 01:04:23.568934 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Oct 9 01:04:23.569255 kubelet[2304]: I1009 01:04:23.569229 2304 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:04:23.569488 kubelet[2304]: I1009 01:04:23.569444 2304 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:04:23.569696 kubelet[2304]: I1009 01:04:23.569557 2304 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:04:23.570218 kubelet[2304]: E1009 01:04:23.570192 2304 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 01:04:23.580045 kubelet[2304]: I1009 01:04:23.580018 2304 topology_manager.go:215] "Topology Admit Handler" podUID="3fdb280ebbd1b18a096489a75f4bc1f3" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:04:23.580927 kubelet[2304]: I1009 01:04:23.580910 2304 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:04:23.581543 kubelet[2304]: I1009 01:04:23.581510 2304 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:04:23.586729 systemd[1]: Created slice kubepods-burstable-pod3fdb280ebbd1b18a096489a75f4bc1f3.slice - libcontainer container kubepods-burstable-pod3fdb280ebbd1b18a096489a75f4bc1f3.slice. Oct 9 01:04:23.600248 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 9 01:04:23.603430 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. 
Oct 9 01:04:23.668143 kubelet[2304]: I1009 01:04:23.668114 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:23.668205 kubelet[2304]: I1009 01:04:23.668142 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:23.668205 kubelet[2304]: I1009 01:04:23.668161 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:23.668205 kubelet[2304]: I1009 01:04:23.668175 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fdb280ebbd1b18a096489a75f4bc1f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3fdb280ebbd1b18a096489a75f4bc1f3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:23.668205 kubelet[2304]: I1009 01:04:23.668189 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fdb280ebbd1b18a096489a75f4bc1f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3fdb280ebbd1b18a096489a75f4bc1f3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:23.668205 kubelet[2304]: I1009 01:04:23.668201 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fdb280ebbd1b18a096489a75f4bc1f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3fdb280ebbd1b18a096489a75f4bc1f3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:23.668367 kubelet[2304]: I1009 01:04:23.668215 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:23.668367 kubelet[2304]: I1009 01:04:23.668228 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:23.668367 kubelet[2304]: I1009 01:04:23.668262 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " 
pod="kube-system/kube-scheduler-localhost" Oct 9 01:04:23.694661 kubelet[2304]: W1009 01:04:23.694615 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:23.694704 kubelet[2304]: E1009 01:04:23.694668 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:23.854744 kubelet[2304]: W1009 01:04:23.854649 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:23.854744 kubelet[2304]: E1009 01:04:23.854681 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:23.899322 kubelet[2304]: E1009 01:04:23.899294 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:23.899859 containerd[1492]: time="2024-10-09T01:04:23.899813906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3fdb280ebbd1b18a096489a75f4bc1f3,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:23.903013 kubelet[2304]: E1009 01:04:23.902985 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:23.903255 containerd[1492]: time="2024-10-09T01:04:23.903233778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:23.905439 kubelet[2304]: E1009 01:04:23.905417 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:23.905832 containerd[1492]: time="2024-10-09T01:04:23.905641402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:24.011527 kubelet[2304]: W1009 01:04:24.011491 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:24.011527 kubelet[2304]: E1009 01:04:24.011529 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:24.267229 kubelet[2304]: E1009 01:04:24.267166 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="1.6s" Oct 9 01:04:24.337934 kubelet[2304]: W1009 01:04:24.337875 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:24.337979 kubelet[2304]: E1009 01:04:24.337940 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:24.370467 kubelet[2304]: I1009 01:04:24.370429 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:04:24.370808 kubelet[2304]: E1009 01:04:24.370762 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Oct 9 01:04:24.925809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503573978.mount: Deactivated successfully. Oct 9 01:04:24.934183 containerd[1492]: time="2024-10-09T01:04:24.934128936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:24.935890 containerd[1492]: time="2024-10-09T01:04:24.935822712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:04:24.936909 containerd[1492]: time="2024-10-09T01:04:24.936877570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:24.937868 containerd[1492]: time="2024-10-09T01:04:24.937833613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:24.938723 containerd[1492]: time="2024-10-09T01:04:24.938684458Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:24.939462 containerd[1492]: time="2024-10-09T01:04:24.939408065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:04:24.940257 containerd[1492]: time="2024-10-09T01:04:24.940206642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 01:04:24.942524 containerd[1492]: time="2024-10-09T01:04:24.942476127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:24.944525 containerd[1492]: time="2024-10-09T01:04:24.944489973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.041203807s" Oct 9 01:04:24.945230 containerd[1492]: time="2024-10-09T01:04:24.945203211Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.045303624s" Oct 9 01:04:24.945943 containerd[1492]: time="2024-10-09T01:04:24.945919003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.04021859s" Oct 9 01:04:25.024628 kubelet[2304]: E1009 01:04:25.024577 2304 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.110:6443: connect: connection refused Oct 9 01:04:25.082679 containerd[1492]: time="2024-10-09T01:04:25.082530419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:25.083167 containerd[1492]: time="2024-10-09T01:04:25.083079949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:25.083167 containerd[1492]: time="2024-10-09T01:04:25.083138539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:25.083167 containerd[1492]: time="2024-10-09T01:04:25.083149199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:25.083824 containerd[1492]: time="2024-10-09T01:04:25.083226845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:25.083956 containerd[1492]: time="2024-10-09T01:04:25.081723746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:25.083956 containerd[1492]: time="2024-10-09T01:04:25.083302166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:25.083956 containerd[1492]: time="2024-10-09T01:04:25.083322264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:25.083956 containerd[1492]: time="2024-10-09T01:04:25.083427391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:25.083956 containerd[1492]: time="2024-10-09T01:04:25.083848591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:25.083956 containerd[1492]: time="2024-10-09T01:04:25.083874679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:25.084146 containerd[1492]: time="2024-10-09T01:04:25.083971220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:25.115541 systemd[1]: Started cri-containerd-8934db8df95fc4decf15eff4bd75a2ddd07136bc7284326d1f0943dde52fb5a9.scope - libcontainer container 8934db8df95fc4decf15eff4bd75a2ddd07136bc7284326d1f0943dde52fb5a9. Oct 9 01:04:25.117228 systemd[1]: Started cri-containerd-8e36e9c322ca3fa3c142cb2949e7d2db0d96ee2784df270d0b4a603489608de8.scope - libcontainer container 8e36e9c322ca3fa3c142cb2949e7d2db0d96ee2784df270d0b4a603489608de8. Oct 9 01:04:25.118634 systemd[1]: Started cri-containerd-9bf9f4590fd4bba7674659f53d88836dbabdbbcb76fe800f7cdd53ac616eb283.scope - libcontainer container 9bf9f4590fd4bba7674659f53d88836dbabdbbcb76fe800f7cdd53ac616eb283. Oct 9 01:04:25.158031 containerd[1492]: time="2024-10-09T01:04:25.157792295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3fdb280ebbd1b18a096489a75f4bc1f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e36e9c322ca3fa3c142cb2949e7d2db0d96ee2784df270d0b4a603489608de8\"" Oct 9 01:04:25.160592 kubelet[2304]: E1009 01:04:25.160474 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:25.165890 containerd[1492]: time="2024-10-09T01:04:25.162868894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"8934db8df95fc4decf15eff4bd75a2ddd07136bc7284326d1f0943dde52fb5a9\"" Oct 9 01:04:25.165890 containerd[1492]: time="2024-10-09T01:04:25.165690795Z" level=info msg="CreateContainer within sandbox \"8e36e9c322ca3fa3c142cb2949e7d2db0d96ee2784df270d0b4a603489608de8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:04:25.166015 kubelet[2304]: E1009 01:04:25.164241 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:25.168600 containerd[1492]: time="2024-10-09T01:04:25.168564884Z" level=info msg="CreateContainer within sandbox \"8934db8df95fc4decf15eff4bd75a2ddd07136bc7284326d1f0943dde52fb5a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:04:25.168950 containerd[1492]: time="2024-10-09T01:04:25.168927013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf9f4590fd4bba7674659f53d88836dbabdbbcb76fe800f7cdd53ac616eb283\"" Oct 9 01:04:25.169692 kubelet[2304]: E1009 01:04:25.169668 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:25.171120 containerd[1492]: time="2024-10-09T01:04:25.171091401Z" level=info msg="CreateContainer within sandbox \"9bf9f4590fd4bba7674659f53d88836dbabdbbcb76fe800f7cdd53ac616eb283\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:04:25.196908 containerd[1492]: time="2024-10-09T01:04:25.196161297Z" level=info msg="CreateContainer within sandbox \"8e36e9c322ca3fa3c142cb2949e7d2db0d96ee2784df270d0b4a603489608de8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b82c82680af3ebf14730125820516c54f8cd2171cc8ce3ad6f34549e7e183475\"" Oct 9 01:04:25.196908 containerd[1492]: time="2024-10-09T01:04:25.196859586Z" level=info msg="StartContainer for \"b82c82680af3ebf14730125820516c54f8cd2171cc8ce3ad6f34549e7e183475\"" Oct 9 01:04:25.200763 containerd[1492]: time="2024-10-09T01:04:25.200725244Z" level=info msg="CreateContainer within sandbox \"8934db8df95fc4decf15eff4bd75a2ddd07136bc7284326d1f0943dde52fb5a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a5bac956ae3984499a09c4f8ac2b0b12e5d7272db3147980db42573fdde3943\"" Oct 9 01:04:25.201176 containerd[1492]: time="2024-10-09T01:04:25.201148408Z" level=info msg="StartContainer for \"7a5bac956ae3984499a09c4f8ac2b0b12e5d7272db3147980db42573fdde3943\"" Oct 9 01:04:25.204147 containerd[1492]: time="2024-10-09T01:04:25.204106093Z" level=info msg="CreateContainer within sandbox \"9bf9f4590fd4bba7674659f53d88836dbabdbbcb76fe800f7cdd53ac616eb283\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d058d324998c64c0a99b1f7f13abe6c11812a993346e314e1924920f0f751d14\"" Oct 9 01:04:25.204843 containerd[1492]: time="2024-10-09T01:04:25.204807619Z" level=info msg="StartContainer for \"d058d324998c64c0a99b1f7f13abe6c11812a993346e314e1924920f0f751d14\"" Oct 9 01:04:25.234514 systemd[1]: Started cri-containerd-7a5bac956ae3984499a09c4f8ac2b0b12e5d7272db3147980db42573fdde3943.scope - libcontainer container 7a5bac956ae3984499a09c4f8ac2b0b12e5d7272db3147980db42573fdde3943. Oct 9 01:04:25.236442 systemd[1]: Started cri-containerd-b82c82680af3ebf14730125820516c54f8cd2171cc8ce3ad6f34549e7e183475.scope - libcontainer container b82c82680af3ebf14730125820516c54f8cd2171cc8ce3ad6f34549e7e183475. Oct 9 01:04:25.241069 systemd[1]: Started cri-containerd-d058d324998c64c0a99b1f7f13abe6c11812a993346e314e1924920f0f751d14.scope - libcontainer container d058d324998c64c0a99b1f7f13abe6c11812a993346e314e1924920f0f751d14. 
Oct 9 01:04:25.288601 containerd[1492]: time="2024-10-09T01:04:25.288526882Z" level=info msg="StartContainer for \"7a5bac956ae3984499a09c4f8ac2b0b12e5d7272db3147980db42573fdde3943\" returns successfully" Oct 9 01:04:25.290490 containerd[1492]: time="2024-10-09T01:04:25.290453795Z" level=info msg="StartContainer for \"d058d324998c64c0a99b1f7f13abe6c11812a993346e314e1924920f0f751d14\" returns successfully" Oct 9 01:04:25.290569 containerd[1492]: time="2024-10-09T01:04:25.290456270Z" level=info msg="StartContainer for \"b82c82680af3ebf14730125820516c54f8cd2171cc8ce3ad6f34549e7e183475\" returns successfully" Oct 9 01:04:25.894560 kubelet[2304]: E1009 01:04:25.894520 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:25.895541 kubelet[2304]: E1009 01:04:25.895516 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:25.898462 kubelet[2304]: E1009 01:04:25.898437 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:25.973422 kubelet[2304]: I1009 01:04:25.972830 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:04:26.293082 kubelet[2304]: E1009 01:04:26.293038 2304 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 01:04:26.391928 kubelet[2304]: I1009 01:04:26.391882 2304 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:04:26.400678 kubelet[2304]: E1009 01:04:26.400638 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:26.501162 kubelet[2304]: E1009 01:04:26.501103 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:26.601353 kubelet[2304]: E1009 01:04:26.601195 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:26.701374 kubelet[2304]: E1009 01:04:26.701315 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:26.801962 kubelet[2304]: E1009 01:04:26.801904 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:26.900323 kubelet[2304]: E1009 01:04:26.900298 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:26.900459 kubelet[2304]: E1009 01:04:26.900298 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:26.902018 kubelet[2304]: E1009 01:04:26.901989 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:27.002675 kubelet[2304]: E1009 01:04:27.002606 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:27.103175 kubelet[2304]: E1009 01:04:27.103125 2304 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:27.204021 kubelet[2304]: E1009 01:04:27.203875 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:27.853473 kubelet[2304]: I1009 01:04:27.853413 2304 apiserver.go:52] "Watching apiserver" Oct 9 01:04:27.864726 kubelet[2304]: I1009 01:04:27.864666 2304 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:04:27.944484 kubelet[2304]: E1009 01:04:27.944424 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:28.151229 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-7.scope)... Oct 9 01:04:28.151255 systemd[1]: Reloading... Oct 9 01:04:28.265380 zram_generator::config[2624]: No configuration found. Oct 9 01:04:28.442493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:04:28.573011 systemd[1]: Reloading finished in 421 ms. Oct 9 01:04:28.624452 kubelet[2304]: I1009 01:04:28.624355 2304 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:04:28.624461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:28.643613 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:04:28.644009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:28.644089 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 116.8M memory peak, 0B memory swap peak. Oct 9 01:04:28.653931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:28.810877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:28.824888 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:04:28.881220 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:04:28.881220 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:04:28.881220 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:04:28.881749 kubelet[2669]: I1009 01:04:28.881255 2669 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:04:28.886291 kubelet[2669]: I1009 01:04:28.886219 2669 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:04:28.886291 kubelet[2669]: I1009 01:04:28.886248 2669 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:04:28.886602 kubelet[2669]: I1009 01:04:28.886477 2669 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:04:28.887819 kubelet[2669]: I1009 01:04:28.887786 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:04:28.889043 kubelet[2669]: I1009 01:04:28.889000 2669 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:04:28.899260 kubelet[2669]: I1009 01:04:28.899221 2669 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:04:28.899598 kubelet[2669]: I1009 01:04:28.899547 2669 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:04:28.899851 kubelet[2669]: I1009 01:04:28.899589 2669 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:04:28.899954 kubelet[2669]: I1009 01:04:28.899862 2669 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:04:28.899954 kubelet[2669]: I1009 01:04:28.899878 2669 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:04:28.899954 kubelet[2669]: I1009 01:04:28.899941 2669 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:28.900085 kubelet[2669]: I1009 01:04:28.900063 2669 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:04:28.900120 kubelet[2669]: I1009 01:04:28.900085 2669 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Oct 9 01:04:28.900156 kubelet[2669]: I1009 01:04:28.900121 2669 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:04:28.900156 kubelet[2669]: I1009 01:04:28.900146 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.900789 2669 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.900998 2669 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.901555 2669 server.go:1264] "Started kubelet" Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.901780 2669 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.901836 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.902158 2669 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:04:28.903208 kubelet[2669]: I1009 01:04:28.903168 2669 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:04:28.904253 kubelet[2669]: I1009 01:04:28.904227 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:04:28.906227 kubelet[2669]: E1009 01:04:28.906195 2669 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 01:04:28.906307 kubelet[2669]: I1009 01:04:28.906252 2669 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:04:28.908140 kubelet[2669]: I1009 01:04:28.907447 2669 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:04:28.908140 kubelet[2669]: I1009 01:04:28.907641 2669 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:04:28.916740 kubelet[2669]: I1009 01:04:28.914916 2669 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:04:28.916740 kubelet[2669]: I1009 01:04:28.915065 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:04:28.917670 kubelet[2669]: I1009 01:04:28.917629 2669 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:04:28.923095 kubelet[2669]: E1009 01:04:28.922083 2669 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:04:28.923486 kubelet[2669]: I1009 01:04:28.923431 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:04:28.926275 kubelet[2669]: I1009 01:04:28.926249 2669 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:04:28.926327 kubelet[2669]: I1009 01:04:28.926284 2669 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:04:28.926327 kubelet[2669]: I1009 01:04:28.926306 2669 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:04:28.926442 kubelet[2669]: E1009 01:04:28.926396 2669 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:04:28.960185 kubelet[2669]: I1009 01:04:28.960150 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:04:28.960185 kubelet[2669]: I1009 01:04:28.960173 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:04:28.960185 kubelet[2669]: I1009 01:04:28.960204 2669 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:28.960495 kubelet[2669]: I1009 01:04:28.960477 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:04:28.960549 kubelet[2669]: I1009 01:04:28.960495 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:04:28.960549 kubelet[2669]: I1009 01:04:28.960519 2669 policy_none.go:49] "None policy: Start" Oct 9 01:04:28.961471 kubelet[2669]: I1009 01:04:28.961449 2669 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:04:28.961518 kubelet[2669]: I1009 01:04:28.961475 2669 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:04:28.961632 kubelet[2669]: I1009 01:04:28.961619 2669 state_mem.go:75] "Updated machine memory state" Oct 9 01:04:28.967790 kubelet[2669]: I1009 01:04:28.967731 2669 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:04:28.968196 kubelet[2669]: I1009 01:04:28.967978 2669 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:04:28.968196 kubelet[2669]: I1009 01:04:28.968115 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:04:29.011925 kubelet[2669]: I1009 01:04:29.011889 2669 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:04:29.019796 kubelet[2669]: I1009 01:04:29.019545 2669 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 01:04:29.019796 kubelet[2669]: I1009 01:04:29.019669 2669 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:04:29.026820 kubelet[2669]: I1009 01:04:29.026632 2669 topology_manager.go:215] "Topology Admit Handler" podUID="3fdb280ebbd1b18a096489a75f4bc1f3" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:04:29.026820 kubelet[2669]: I1009 01:04:29.026792 2669 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:04:29.027030 kubelet[2669]: I1009 01:04:29.026855 2669 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:04:29.035016 kubelet[2669]: E1009 01:04:29.034966 2669 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:29.130437 sudo[2702]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 9 01:04:29.130880 sudo[2702]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) Oct 9 01:04:29.209140 kubelet[2669]: I1009 01:04:29.209055 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:29.209140 kubelet[2669]: I1009 01:04:29.209124 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:29.209352 kubelet[2669]: I1009 01:04:29.209160 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:29.209352 kubelet[2669]: I1009 01:04:29.209193 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fdb280ebbd1b18a096489a75f4bc1f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3fdb280ebbd1b18a096489a75f4bc1f3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:29.209352 kubelet[2669]: I1009 01:04:29.209223 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:29.209352 kubelet[2669]: I1009 01:04:29.209249 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:04:29.209352 kubelet[2669]: I1009 01:04:29.209269 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fdb280ebbd1b18a096489a75f4bc1f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3fdb280ebbd1b18a096489a75f4bc1f3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:29.209531 kubelet[2669]: I1009 01:04:29.209290 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fdb280ebbd1b18a096489a75f4bc1f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3fdb280ebbd1b18a096489a75f4bc1f3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:29.209531 kubelet[2669]: I1009 01:04:29.209312 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:04:29.336567 kubelet[2669]: E1009 01:04:29.335723 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:29.336567 kubelet[2669]: E1009 01:04:29.336431 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:29.336769 kubelet[2669]: E1009 01:04:29.336574 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:29.593508 sudo[2702]: pam_unix(sudo:session): session closed for user root Oct 9 01:04:29.901547 kubelet[2669]: I1009 01:04:29.901497 2669 apiserver.go:52] "Watching apiserver" Oct 9 01:04:29.908183 kubelet[2669]: I1009 01:04:29.908145 2669 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:04:29.935254 kubelet[2669]: E1009 01:04:29.935216 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:29.938354 kubelet[2669]: E1009 01:04:29.936099 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:29.943608 kubelet[2669]: E1009 01:04:29.943561 2669 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 01:04:29.944359 kubelet[2669]: E1009 01:04:29.943979 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:29.983372 kubelet[2669]: I1009 01:04:29.983285 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.983266607 podStartE2EDuration="2.983266607s" podCreationTimestamp="2024-10-09 01:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:29.980379912 +0000 UTC m=+1.150792553" watchObservedRunningTime="2024-10-09 01:04:29.983266607 +0000 UTC m=+1.153679248" Oct 9 01:04:29.983533 kubelet[2669]: I1009 01:04:29.983408 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.983403508 podStartE2EDuration="983.403508ms" podCreationTimestamp="2024-10-09 01:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:29.969595632 +0000 UTC m=+1.140008273" watchObservedRunningTime="2024-10-09 01:04:29.983403508 +0000 UTC m=+1.153816139" Oct 9 01:04:29.990951 kubelet[2669]: I1009 01:04:29.990885 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.99086357 podStartE2EDuration="990.86357ms" podCreationTimestamp="2024-10-09 01:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:29.990565162 +0000 UTC m=+1.160977803" watchObservedRunningTime="2024-10-09 01:04:29.99086357 +0000 UTC m=+1.161276211" Oct 9 01:04:30.943056 kubelet[2669]: E1009 01:04:30.942934 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:31.773392 sudo[1673]: pam_unix(sudo:session): session closed for user root Oct 9 01:04:31.777487 sshd[1670]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:31.784255 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:44480.service: Deactivated successfully. Oct 9 01:04:31.789084 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:04:31.790222 systemd[1]: session-7.scope: Consumed 5.270s CPU time, 190.6M memory peak, 0B memory swap peak. Oct 9 01:04:31.791286 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:04:31.795021 systemd-logind[1481]: Removed session 7. Oct 9 01:04:31.946386 kubelet[2669]: E1009 01:04:31.945143 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:32.945793 kubelet[2669]: E1009 01:04:32.945762 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:37.977173 kubelet[2669]: E1009 01:04:37.977120 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:38.953020 kubelet[2669]: E1009 01:04:38.952977 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:39.277884 kubelet[2669]: E1009 01:04:39.277692 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:39.623585 update_engine[1484]: I20241009 01:04:39.623402 1484 update_attempter.cc:509] Updating boot flags... 
Oct 9 01:04:39.697801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2750) Oct 9 01:04:39.733369 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2754) Oct 9 01:04:39.767438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2754) Oct 9 01:04:39.954850 kubelet[2669]: E1009 01:04:39.954827 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:42.010919 kubelet[2669]: E1009 01:04:42.010852 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:42.959034 kubelet[2669]: E1009 01:04:42.958989 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.156572 kubelet[2669]: I1009 01:04:45.156115 2669 topology_manager.go:215] "Topology Admit Handler" podUID="6edfe223-49db-4451-899b-502fda347829" podNamespace="kube-system" podName="kube-proxy-xgld9" Oct 9 01:04:45.165053 kubelet[2669]: I1009 01:04:45.165009 2669 topology_manager.go:215] "Topology Admit Handler" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" podNamespace="kube-system" podName="cilium-ff72f" Oct 9 01:04:45.177012 systemd[1]: Created slice kubepods-besteffort-pod6edfe223_49db_4451_899b_502fda347829.slice - libcontainer container kubepods-besteffort-pod6edfe223_49db_4451_899b_502fda347829.slice. Oct 9 01:04:45.183437 systemd[1]: Created slice kubepods-burstable-poddf4c8b2f_2422_4c79_ba86_ee6d1e51aecb.slice - libcontainer container kubepods-burstable-poddf4c8b2f_2422_4c79_ba86_ee6d1e51aecb.slice. Oct 9 01:04:45.235361 kubelet[2669]: I1009 01:04:45.235290 2669 topology_manager.go:215] "Topology Admit Handler" podUID="20fbf6b2-9b5c-4bd3-8206-7d2875bf0958" podNamespace="kube-system" podName="cilium-operator-599987898-pmrsk" Oct 9 01:04:45.244520 systemd[1]: Created slice kubepods-besteffort-pod20fbf6b2_9b5c_4bd3_8206_7d2875bf0958.slice - libcontainer container kubepods-besteffort-pod20fbf6b2_9b5c_4bd3_8206_7d2875bf0958.slice. Oct 9 01:04:45.270899 kubelet[2669]: I1009 01:04:45.270557 2669 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:04:45.272069 containerd[1492]: time="2024-10-09T01:04:45.272038915Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 01:04:45.272443 kubelet[2669]: I1009 01:04:45.272272 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:04:45.317678 kubelet[2669]: I1009 01:04:45.317634 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hubble-tls\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317678 kubelet[2669]: I1009 01:04:45.317672 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hostproc\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317678 kubelet[2669]: I1009 01:04:45.317690 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-run\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317939 kubelet[2669]: I1009 01:04:45.317760 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-etc-cni-netd\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317939 kubelet[2669]: I1009 01:04:45.317830 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cni-path\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317939 kubelet[2669]: I1009 01:04:45.317860 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-lib-modules\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317939 kubelet[2669]: I1009 01:04:45.317880 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-clustermesh-secrets\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317939 kubelet[2669]: I1009 01:04:45.317902 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-cgroup\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.317939 kubelet[2669]: I1009 01:04:45.317925 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-xtables-lock\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.318127 kubelet[2669]: I1009 01:04:45.317946 2669 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-config-path\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.318127 kubelet[2669]: I1009 01:04:45.317970 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-net\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.318127 kubelet[2669]: I1009 01:04:45.317992 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6edfe223-49db-4451-899b-502fda347829-lib-modules\") pod \"kube-proxy-xgld9\" (UID: \"6edfe223-49db-4451-899b-502fda347829\") " pod="kube-system/kube-proxy-xgld9" Oct 9 01:04:45.318127 kubelet[2669]: I1009 01:04:45.318016 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wvj\" (UniqueName: \"kubernetes.io/projected/6edfe223-49db-4451-899b-502fda347829-kube-api-access-t7wvj\") pod \"kube-proxy-xgld9\" (UID: \"6edfe223-49db-4451-899b-502fda347829\") " pod="kube-system/kube-proxy-xgld9" Oct 9 01:04:45.318127 kubelet[2669]: I1009 01:04:45.318045 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6edfe223-49db-4451-899b-502fda347829-kube-proxy\") pod \"kube-proxy-xgld9\" (UID: \"6edfe223-49db-4451-899b-502fda347829\") " pod="kube-system/kube-proxy-xgld9" Oct 9 01:04:45.318297 kubelet[2669]: I1009 01:04:45.318084 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6edfe223-49db-4451-899b-502fda347829-xtables-lock\") pod \"kube-proxy-xgld9\" (UID: \"6edfe223-49db-4451-899b-502fda347829\") " pod="kube-system/kube-proxy-xgld9" Oct 9 01:04:45.318297 kubelet[2669]: I1009 01:04:45.318108 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-kernel\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.318297 kubelet[2669]: I1009 01:04:45.318132 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sdp\" (UniqueName: \"kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-kube-api-access-72sdp\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.318297 kubelet[2669]: I1009 01:04:45.318148 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-bpf-maps\") pod \"cilium-ff72f\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " pod="kube-system/cilium-ff72f" Oct 9 01:04:45.420078 kubelet[2669]: I1009 01:04:45.419110 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75fz\" 
(UniqueName: \"kubernetes.io/projected/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-kube-api-access-l75fz\") pod \"cilium-operator-599987898-pmrsk\" (UID: \"20fbf6b2-9b5c-4bd3-8206-7d2875bf0958\") " pod="kube-system/cilium-operator-599987898-pmrsk" Oct 9 01:04:45.420078 kubelet[2669]: I1009 01:04:45.419146 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-cilium-config-path\") pod \"cilium-operator-599987898-pmrsk\" (UID: \"20fbf6b2-9b5c-4bd3-8206-7d2875bf0958\") " pod="kube-system/cilium-operator-599987898-pmrsk" Oct 9 01:04:45.492576 kubelet[2669]: E1009 01:04:45.492535 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.493851 containerd[1492]: time="2024-10-09T01:04:45.493054330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgld9,Uid:6edfe223-49db-4451-899b-502fda347829,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:45.493851 containerd[1492]: time="2024-10-09T01:04:45.493649273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ff72f,Uid:df4c8b2f-2422-4c79-ba86-ee6d1e51aecb,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:45.493949 kubelet[2669]: E1009 01:04:45.493183 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.533203 containerd[1492]: time="2024-10-09T01:04:45.533055112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:45.533203 containerd[1492]: time="2024-10-09T01:04:45.533185477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:45.533356 containerd[1492]: time="2024-10-09T01:04:45.533247124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:45.533536 containerd[1492]: time="2024-10-09T01:04:45.533435107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:45.539183 containerd[1492]: time="2024-10-09T01:04:45.538874542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:45.539183 containerd[1492]: time="2024-10-09T01:04:45.538941829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:45.539183 containerd[1492]: time="2024-10-09T01:04:45.538952880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:45.539183 containerd[1492]: time="2024-10-09T01:04:45.539018193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:45.548731 kubelet[2669]: E1009 01:04:45.548701 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.550206 containerd[1492]: time="2024-10-09T01:04:45.550115704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pmrsk,Uid:20fbf6b2-9b5c-4bd3-8206-7d2875bf0958,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:45.554560 systemd[1]: Started cri-containerd-ce3a9db8ba320e96537b134d195e5a0082ff497904fe90ef1a653b59a742130d.scope - libcontainer container ce3a9db8ba320e96537b134d195e5a0082ff497904fe90ef1a653b59a742130d. Oct 9 01:04:45.558529 systemd[1]: Started cri-containerd-51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891.scope - libcontainer container 51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891. Oct 9 01:04:45.586056 containerd[1492]: time="2024-10-09T01:04:45.586012539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgld9,Uid:6edfe223-49db-4451-899b-502fda347829,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3a9db8ba320e96537b134d195e5a0082ff497904fe90ef1a653b59a742130d\"" Oct 9 01:04:45.589788 kubelet[2669]: E1009 01:04:45.589761 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.592779 containerd[1492]: time="2024-10-09T01:04:45.592448763Z" level=info msg="CreateContainer within sandbox \"ce3a9db8ba320e96537b134d195e5a0082ff497904fe90ef1a653b59a742130d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:04:45.595056 containerd[1492]: time="2024-10-09T01:04:45.594609356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:45.595056 containerd[1492]: time="2024-10-09T01:04:45.594696911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:45.595056 containerd[1492]: time="2024-10-09T01:04:45.594719143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:45.595056 containerd[1492]: time="2024-10-09T01:04:45.594813280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:45.598533 containerd[1492]: time="2024-10-09T01:04:45.598496364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ff72f,Uid:df4c8b2f-2422-4c79-ba86-ee6d1e51aecb,Namespace:kube-system,Attempt:0,} returns sandbox id \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\"" Oct 9 01:04:45.601080 kubelet[2669]: E1009 01:04:45.600872 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.602614 containerd[1492]: time="2024-10-09T01:04:45.602498809Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 01:04:45.619523 systemd[1]: Started cri-containerd-3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5.scope - libcontainer container 3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5. Oct 9 01:04:45.623487 containerd[1492]: time="2024-10-09T01:04:45.623438595Z" level=info msg="CreateContainer within sandbox \"ce3a9db8ba320e96537b134d195e5a0082ff497904fe90ef1a653b59a742130d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b4d8618cfa4f4d498a99434e51380d671f2a6ae2498b285e1ebb4d810afc247\"" Oct 9 01:04:45.624174 containerd[1492]: time="2024-10-09T01:04:45.624147643Z" level=info msg="StartContainer for \"8b4d8618cfa4f4d498a99434e51380d671f2a6ae2498b285e1ebb4d810afc247\"" Oct 9 01:04:45.654723 systemd[1]: Started cri-containerd-8b4d8618cfa4f4d498a99434e51380d671f2a6ae2498b285e1ebb4d810afc247.scope - libcontainer container 8b4d8618cfa4f4d498a99434e51380d671f2a6ae2498b285e1ebb4d810afc247. Oct 9 01:04:45.663595 containerd[1492]: time="2024-10-09T01:04:45.663506984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pmrsk,Uid:20fbf6b2-9b5c-4bd3-8206-7d2875bf0958,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5\"" Oct 9 01:04:45.664262 kubelet[2669]: E1009 01:04:45.664229 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.693249 containerd[1492]: time="2024-10-09T01:04:45.693137013Z" level=info msg="StartContainer for \"8b4d8618cfa4f4d498a99434e51380d671f2a6ae2498b285e1ebb4d810afc247\" returns successfully" Oct 9 01:04:45.965296 kubelet[2669]: E1009 01:04:45.965173 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:45.973260 kubelet[2669]: I1009 01:04:45.973199 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgld9" podStartSLOduration=0.973181894 podStartE2EDuration="973.181894ms" podCreationTimestamp="2024-10-09 01:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:45.972658397 +0000 UTC m=+17.143071048" watchObservedRunningTime="2024-10-09 01:04:45.973181894 +0000 UTC m=+17.143594535" Oct 9 01:04:52.472710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608702288.mount: Deactivated successfully. 
Oct 9 01:04:55.527306 containerd[1492]: time="2024-10-09T01:04:55.527221469Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:55.528013 containerd[1492]: time="2024-10-09T01:04:55.527977641Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735327" Oct 9 01:04:55.529201 containerd[1492]: time="2024-10-09T01:04:55.529149504Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:55.530582 containerd[1492]: time="2024-10-09T01:04:55.530555578Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.928023506s" Oct 9 01:04:55.530639 containerd[1492]: time="2024-10-09T01:04:55.530583901Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 9 01:04:55.532275 containerd[1492]: time="2024-10-09T01:04:55.532226851Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 01:04:55.537868 containerd[1492]: time="2024-10-09T01:04:55.537815329Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 01:04:55.551548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950137363.mount: Deactivated successfully. Oct 9 01:04:55.552900 containerd[1492]: time="2024-10-09T01:04:55.552860169Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\"" Oct 9 01:04:55.553472 containerd[1492]: time="2024-10-09T01:04:55.553428387Z" level=info msg="StartContainer for \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\"" Oct 9 01:04:55.582533 systemd[1]: Started cri-containerd-456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36.scope - libcontainer container 456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36. Oct 9 01:04:55.610389 containerd[1492]: time="2024-10-09T01:04:55.610316338Z" level=info msg="StartContainer for \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\" returns successfully" Oct 9 01:04:55.621610 systemd[1]: cri-containerd-456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36.scope: Deactivated successfully. Oct 9 01:04:55.789269 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:43694.service - OpenSSH per-connection server daemon (10.0.0.1:43694). 
Oct 9 01:04:55.842690 sshd[3124]: Accepted publickey for core from 10.0.0.1 port 43694 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:04:55.844442 sshd[3124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:04:55.848690 systemd-logind[1481]: New session 8 of user core. Oct 9 01:04:55.849616 containerd[1492]: time="2024-10-09T01:04:55.849489691Z" level=info msg="shim disconnected" id=456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36 namespace=k8s.io Oct 9 01:04:55.849616 containerd[1492]: time="2024-10-09T01:04:55.849614146Z" level=warning msg="cleaning up after shim disconnected" id=456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36 namespace=k8s.io Oct 9 01:04:55.849732 containerd[1492]: time="2024-10-09T01:04:55.849623663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:04:55.855529 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:04:55.975215 sshd[3124]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:55.979369 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:43694.service: Deactivated successfully. Oct 9 01:04:55.981393 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:04:55.982289 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:04:55.983262 systemd-logind[1481]: Removed session 8. Oct 9 01:04:55.984134 kubelet[2669]: E1009 01:04:55.984102 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:55.986060 containerd[1492]: time="2024-10-09T01:04:55.986026221Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 01:04:56.002420 containerd[1492]: time="2024-10-09T01:04:56.002364742Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\"" Oct 9 01:04:56.002810 containerd[1492]: time="2024-10-09T01:04:56.002785313Z" level=info msg="StartContainer for \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\"" Oct 9 01:04:56.026473 systemd[1]: Started cri-containerd-49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21.scope - libcontainer container 49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21. Oct 9 01:04:56.060646 containerd[1492]: time="2024-10-09T01:04:56.060532770Z" level=info msg="StartContainer for \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\" returns successfully" Oct 9 01:04:56.070567 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 01:04:56.070886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:04:56.070967 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:04:56.081909 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:04:56.082290 systemd[1]: cri-containerd-49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21.scope: Deactivated successfully. Oct 9 01:04:56.100799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 9 01:04:56.105881 containerd[1492]: time="2024-10-09T01:04:56.105825031Z" level=info msg="shim disconnected" id=49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21 namespace=k8s.io Oct 9 01:04:56.106010 containerd[1492]: time="2024-10-09T01:04:56.105882379Z" level=warning msg="cleaning up after shim disconnected" id=49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21 namespace=k8s.io Oct 9 01:04:56.106010 containerd[1492]: time="2024-10-09T01:04:56.105891286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:04:56.548803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36-rootfs.mount: Deactivated successfully. Oct 9 01:04:56.724495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282314563.mount: Deactivated successfully. Oct 9 01:04:56.984585 containerd[1492]: time="2024-10-09T01:04:56.984525838Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:56.985373 containerd[1492]: time="2024-10-09T01:04:56.985323718Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907233" Oct 9 01:04:56.986482 containerd[1492]: time="2024-10-09T01:04:56.986451858Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:56.987263 kubelet[2669]: E1009 01:04:56.987238 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:56.988509 containerd[1492]: time="2024-10-09T01:04:56.988481144Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.456222514s" Oct 9 01:04:56.988553 containerd[1492]: time="2024-10-09T01:04:56.988507854Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 9 01:04:56.990358 containerd[1492]: time="2024-10-09T01:04:56.990272452Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 01:04:56.990692 containerd[1492]: time="2024-10-09T01:04:56.990659700Z" level=info msg="CreateContainer within sandbox \"3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 9 01:04:57.004600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311418373.mount: Deactivated successfully. 
Oct 9 01:04:57.013747 containerd[1492]: time="2024-10-09T01:04:57.013698720Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\"" Oct 9 01:04:57.014309 containerd[1492]: time="2024-10-09T01:04:57.014274983Z" level=info msg="StartContainer for \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\"" Oct 9 01:04:57.015597 containerd[1492]: time="2024-10-09T01:04:57.015566030Z" level=info msg="CreateContainer within sandbox \"3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\"" Oct 9 01:04:57.015986 containerd[1492]: time="2024-10-09T01:04:57.015920636Z" level=info msg="StartContainer for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\"" Oct 9 01:04:57.047541 systemd[1]: Started cri-containerd-690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0.scope - libcontainer container 690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0. Oct 9 01:04:57.053925 systemd[1]: Started cri-containerd-4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37.scope - libcontainer container 4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37. Oct 9 01:04:57.089765 systemd[1]: cri-containerd-690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0.scope: Deactivated successfully. Oct 9 01:04:57.090538 containerd[1492]: time="2024-10-09T01:04:57.090493873Z" level=info msg="StartContainer for \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\" returns successfully" Oct 9 01:04:57.090795 containerd[1492]: time="2024-10-09T01:04:57.090703166Z" level=info msg="StartContainer for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" returns successfully" Oct 9 01:04:57.330265 containerd[1492]: time="2024-10-09T01:04:57.330105496Z" level=info msg="shim disconnected" id=690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0 namespace=k8s.io Oct 9 01:04:57.330265 containerd[1492]: time="2024-10-09T01:04:57.330170998Z" level=warning msg="cleaning up after shim disconnected" id=690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0 namespace=k8s.io Oct 9 01:04:57.330265 containerd[1492]: time="2024-10-09T01:04:57.330179595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:04:57.989893 kubelet[2669]: E1009 01:04:57.989846 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:57.991684 containerd[1492]: time="2024-10-09T01:04:57.991529927Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 01:04:57.992058 kubelet[2669]: E1009 01:04:57.991583 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:58.012174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529580623.mount: Deactivated successfully. 
Oct 9 01:04:58.012958 containerd[1492]: time="2024-10-09T01:04:58.012917576Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\"" Oct 9 01:04:58.013481 containerd[1492]: time="2024-10-09T01:04:58.013454055Z" level=info msg="StartContainer for \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\"" Oct 9 01:04:58.019946 kubelet[2669]: I1009 01:04:58.019875 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pmrsk" podStartSLOduration=1.695901383 podStartE2EDuration="13.019857359s" podCreationTimestamp="2024-10-09 01:04:45 +0000 UTC" firstStartedPulling="2024-10-09 01:04:45.665221126 +0000 UTC m=+16.835633767" lastFinishedPulling="2024-10-09 01:04:56.989177102 +0000 UTC m=+28.159589743" observedRunningTime="2024-10-09 01:04:58.019823405 +0000 UTC m=+29.190236077" watchObservedRunningTime="2024-10-09 01:04:58.019857359 +0000 UTC m=+29.190270010" Oct 9 01:04:58.078466 systemd[1]: Started cri-containerd-68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6.scope - libcontainer container 68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6. Oct 9 01:04:58.102019 systemd[1]: cri-containerd-68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6.scope: Deactivated successfully. Oct 9 01:04:58.105073 containerd[1492]: time="2024-10-09T01:04:58.105033371Z" level=info msg="StartContainer for \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\" returns successfully" Oct 9 01:04:58.126545 containerd[1492]: time="2024-10-09T01:04:58.126488254Z" level=info msg="shim disconnected" id=68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6 namespace=k8s.io Oct 9 01:04:58.126545 containerd[1492]: time="2024-10-09T01:04:58.126540793Z" level=warning msg="cleaning up after shim disconnected" id=68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6 namespace=k8s.io Oct 9 01:04:58.126545 containerd[1492]: time="2024-10-09T01:04:58.126548327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:04:58.549148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6-rootfs.mount: Deactivated successfully. Oct 9 01:04:59.042128 kubelet[2669]: E1009 01:04:59.042082 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:59.043509 kubelet[2669]: E1009 01:04:59.042724 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:59.045301 containerd[1492]: time="2024-10-09T01:04:59.045254402Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 01:04:59.088456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016333108.mount: Deactivated successfully. 
Oct 9 01:04:59.090448 containerd[1492]: time="2024-10-09T01:04:59.090403975Z" level=info msg="CreateContainer within sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\"" Oct 9 01:04:59.090914 containerd[1492]: time="2024-10-09T01:04:59.090877825Z" level=info msg="StartContainer for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\"" Oct 9 01:04:59.120478 systemd[1]: Started cri-containerd-71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61.scope - libcontainer container 71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61. Oct 9 01:04:59.156276 containerd[1492]: time="2024-10-09T01:04:59.156227376Z" level=info msg="StartContainer for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" returns successfully" Oct 9 01:04:59.303302 kubelet[2669]: I1009 01:04:59.303196 2669 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:04:59.325769 kubelet[2669]: I1009 01:04:59.325142 2669 topology_manager.go:215] "Topology Admit Handler" podUID="0f38e106-3cb7-4287-8f0a-2c69b74deda5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bqs6t" Oct 9 01:04:59.327263 kubelet[2669]: I1009 01:04:59.327215 2669 topology_manager.go:215] "Topology Admit Handler" podUID="2d649119-cd59-47d0-b099-29b8e28702a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f4nhl" Oct 9 01:04:59.339152 systemd[1]: Created slice kubepods-burstable-pod0f38e106_3cb7_4287_8f0a_2c69b74deda5.slice - libcontainer container kubepods-burstable-pod0f38e106_3cb7_4287_8f0a_2c69b74deda5.slice. Oct 9 01:04:59.348248 systemd[1]: Created slice kubepods-burstable-pod2d649119_cd59_47d0_b099_29b8e28702a3.slice - libcontainer container kubepods-burstable-pod2d649119_cd59_47d0_b099_29b8e28702a3.slice. 
Oct 9 01:04:59.511285 kubelet[2669]: I1009 01:04:59.511241 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kqb\" (UniqueName: \"kubernetes.io/projected/0f38e106-3cb7-4287-8f0a-2c69b74deda5-kube-api-access-n9kqb\") pod \"coredns-7db6d8ff4d-bqs6t\" (UID: \"0f38e106-3cb7-4287-8f0a-2c69b74deda5\") " pod="kube-system/coredns-7db6d8ff4d-bqs6t" Oct 9 01:04:59.511285 kubelet[2669]: I1009 01:04:59.511293 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmpl\" (UniqueName: \"kubernetes.io/projected/2d649119-cd59-47d0-b099-29b8e28702a3-kube-api-access-bfmpl\") pod \"coredns-7db6d8ff4d-f4nhl\" (UID: \"2d649119-cd59-47d0-b099-29b8e28702a3\") " pod="kube-system/coredns-7db6d8ff4d-f4nhl" Oct 9 01:04:59.511463 kubelet[2669]: I1009 01:04:59.511323 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f38e106-3cb7-4287-8f0a-2c69b74deda5-config-volume\") pod \"coredns-7db6d8ff4d-bqs6t\" (UID: \"0f38e106-3cb7-4287-8f0a-2c69b74deda5\") " pod="kube-system/coredns-7db6d8ff4d-bqs6t" Oct 9 01:04:59.511463 kubelet[2669]: I1009 01:04:59.511428 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d649119-cd59-47d0-b099-29b8e28702a3-config-volume\") pod \"coredns-7db6d8ff4d-f4nhl\" (UID: \"2d649119-cd59-47d0-b099-29b8e28702a3\") " pod="kube-system/coredns-7db6d8ff4d-f4nhl" Oct 9 01:04:59.548742 systemd[1]: run-containerd-runc-k8s.io-71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61-runc.M4O298.mount: Deactivated successfully. Oct 9 01:04:59.647053 kubelet[2669]: E1009 01:04:59.647023 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:59.647553 containerd[1492]: time="2024-10-09T01:04:59.647525227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqs6t,Uid:0f38e106-3cb7-4287-8f0a-2c69b74deda5,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:59.654991 kubelet[2669]: E1009 01:04:59.654942 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:04:59.655520 containerd[1492]: time="2024-10-09T01:04:59.655481658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f4nhl,Uid:2d649119-cd59-47d0-b099-29b8e28702a3,Namespace:kube-system,Attempt:0,}" Oct 9 01:05:00.045691 kubelet[2669]: E1009 01:05:00.045578 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:00.989437 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:38814.service - OpenSSH per-connection server daemon (10.0.0.1:38814). Oct 9 01:05:01.026458 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 38814 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:01.028256 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:01.032194 systemd-logind[1481]: New session 9 of user core. Oct 9 01:05:01.047492 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 9 01:05:01.047687 kubelet[2669]: E1009 01:05:01.047569 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:01.160975 sshd[3527]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:01.165067 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:38814.service: Deactivated successfully. Oct 9 01:05:01.167073 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:05:01.167760 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:05:01.168778 systemd-logind[1481]: Removed session 9. Oct 9 01:05:01.382894 systemd-networkd[1399]: cilium_host: Link UP Oct 9 01:05:01.383054 systemd-networkd[1399]: cilium_net: Link UP Oct 9 01:05:01.383241 systemd-networkd[1399]: cilium_net: Gained carrier Oct 9 01:05:01.384400 systemd-networkd[1399]: cilium_host: Gained carrier Oct 9 01:05:01.384766 systemd-networkd[1399]: cilium_net: Gained IPv6LL Oct 9 01:05:01.491827 systemd-networkd[1399]: cilium_vxlan: Link UP Oct 9 01:05:01.491836 systemd-networkd[1399]: cilium_vxlan: Gained carrier Oct 9 01:05:01.715377 kernel: NET: Registered PF_ALG protocol family Oct 9 01:05:01.959504 systemd-networkd[1399]: cilium_host: Gained IPv6LL Oct 9 01:05:02.049494 kubelet[2669]: E1009 01:05:02.049368 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:02.397957 systemd-networkd[1399]: lxc_health: Link UP Oct 9 01:05:02.409189 systemd-networkd[1399]: lxc_health: Gained carrier Oct 9 01:05:02.727445 systemd-networkd[1399]: lxcd6595df8daa4: Link UP Oct 9 01:05:02.734363 kernel: eth0: renamed from tmp8635f Oct 9 01:05:02.738026 systemd-networkd[1399]: lxcd6595df8daa4: Gained carrier Oct 9 01:05:02.745326 systemd-networkd[1399]: lxc2f4f064124cf: Link UP Oct 9 01:05:02.751450 kernel: eth0: renamed from tmp1a867 Oct 9 01:05:02.754205 systemd-networkd[1399]: lxc2f4f064124cf: Gained carrier Oct 9 01:05:02.792563 systemd-networkd[1399]: cilium_vxlan: Gained IPv6LL Oct 9 01:05:03.496355 kubelet[2669]: E1009 01:05:03.496306 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:03.541480 kubelet[2669]: I1009 01:05:03.540735 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ff72f" podStartSLOduration=8.610618954 podStartE2EDuration="18.540711444s" podCreationTimestamp="2024-10-09 01:04:45 +0000 UTC" firstStartedPulling="2024-10-09 01:04:45.601979129 +0000 UTC m=+16.772391770" lastFinishedPulling="2024-10-09 01:04:55.532071619 +0000 UTC m=+26.702484260" observedRunningTime="2024-10-09 01:05:00.059843909 +0000 UTC m=+31.230256560" watchObservedRunningTime="2024-10-09 01:05:03.540711444 +0000 UTC m=+34.711124075" Oct 9 01:05:04.392605 systemd-networkd[1399]: lxc_health: Gained IPv6LL Oct 9 01:05:04.393004 systemd-networkd[1399]: lxc2f4f064124cf: Gained IPv6LL Oct 9 01:05:04.650454 systemd-networkd[1399]: lxcd6595df8daa4: Gained IPv6LL Oct 9 01:05:06.177963 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:38820.service - OpenSSH per-connection server daemon (10.0.0.1:38820). 
Oct 9 01:05:06.236962 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 38820 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:06.238418 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:06.243976 systemd-logind[1481]: New session 10 of user core. Oct 9 01:05:06.253584 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:05:06.398382 sshd[3919]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:06.409327 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:05:06.412723 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:38820.service: Deactivated successfully. Oct 9 01:05:06.416936 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:05:06.418160 systemd-logind[1481]: Removed session 10. Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451745766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451809285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451826758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451904443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451420184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451487561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451500976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:06.452254 containerd[1492]: time="2024-10-09T01:05:06.451593400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:06.482615 systemd[1]: Started cri-containerd-1a8674d376ad9ce639005c91cd92f9db522107d7e59e86bcd3d40e45429c7415.scope - libcontainer container 1a8674d376ad9ce639005c91cd92f9db522107d7e59e86bcd3d40e45429c7415. Oct 9 01:05:06.484981 systemd[1]: Started cri-containerd-8635fedf5d7304da8df72726fc8a4f9fb6ce83068da21d98a00173610d5f952f.scope - libcontainer container 8635fedf5d7304da8df72726fc8a4f9fb6ce83068da21d98a00173610d5f952f. 
Oct 9 01:05:06.499432 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:05:06.501404 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:05:06.527233 containerd[1492]: time="2024-10-09T01:05:06.527178521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f4nhl,Uid:2d649119-cd59-47d0-b099-29b8e28702a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a8674d376ad9ce639005c91cd92f9db522107d7e59e86bcd3d40e45429c7415\"" Oct 9 01:05:06.528822 kubelet[2669]: E1009 01:05:06.528790 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:06.532066 containerd[1492]: time="2024-10-09T01:05:06.532022436Z" level=info msg="CreateContainer within sandbox \"1a8674d376ad9ce639005c91cd92f9db522107d7e59e86bcd3d40e45429c7415\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:05:06.533521 containerd[1492]: time="2024-10-09T01:05:06.533493939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqs6t,Uid:0f38e106-3cb7-4287-8f0a-2c69b74deda5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8635fedf5d7304da8df72726fc8a4f9fb6ce83068da21d98a00173610d5f952f\"" Oct 9 01:05:06.534711 kubelet[2669]: E1009 01:05:06.534683 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:06.540002 containerd[1492]: time="2024-10-09T01:05:06.539949510Z" level=info msg="CreateContainer within sandbox \"8635fedf5d7304da8df72726fc8a4f9fb6ce83068da21d98a00173610d5f952f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:05:06.566436 containerd[1492]: time="2024-10-09T01:05:06.566390688Z" level=info msg="CreateContainer within sandbox \"8635fedf5d7304da8df72726fc8a4f9fb6ce83068da21d98a00173610d5f952f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1972f048a8f1374149eab300ff58515a26380ef33bc18c7d230caf92e56be532\"" Oct 9 01:05:06.566900 containerd[1492]: time="2024-10-09T01:05:06.566877451Z" level=info msg="StartContainer for \"1972f048a8f1374149eab300ff58515a26380ef33bc18c7d230caf92e56be532\"" Oct 9 01:05:06.568817 containerd[1492]: time="2024-10-09T01:05:06.568774754Z" level=info msg="CreateContainer within sandbox \"1a8674d376ad9ce639005c91cd92f9db522107d7e59e86bcd3d40e45429c7415\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ed9b4fdc3650bbf7ca7615649a84be2cd84cb9b0cf67890fb435b66b46136d9\"" Oct 9 01:05:06.569187 containerd[1492]: time="2024-10-09T01:05:06.569164315Z" level=info msg="StartContainer for \"9ed9b4fdc3650bbf7ca7615649a84be2cd84cb9b0cf67890fb435b66b46136d9\"" Oct 9 01:05:06.602576 systemd[1]: Started cri-containerd-1972f048a8f1374149eab300ff58515a26380ef33bc18c7d230caf92e56be532.scope - libcontainer container 1972f048a8f1374149eab300ff58515a26380ef33bc18c7d230caf92e56be532. Oct 9 01:05:06.606645 systemd[1]: Started cri-containerd-9ed9b4fdc3650bbf7ca7615649a84be2cd84cb9b0cf67890fb435b66b46136d9.scope - libcontainer container 9ed9b4fdc3650bbf7ca7615649a84be2cd84cb9b0cf67890fb435b66b46136d9. 
Oct 9 01:05:06.636541 containerd[1492]: time="2024-10-09T01:05:06.636461485Z" level=info msg="StartContainer for \"9ed9b4fdc3650bbf7ca7615649a84be2cd84cb9b0cf67890fb435b66b46136d9\" returns successfully" Oct 9 01:05:06.642523 containerd[1492]: time="2024-10-09T01:05:06.642467642Z" level=info msg="StartContainer for \"1972f048a8f1374149eab300ff58515a26380ef33bc18c7d230caf92e56be532\" returns successfully" Oct 9 01:05:07.061589 kubelet[2669]: E1009 01:05:07.061533 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:07.062933 kubelet[2669]: E1009 01:05:07.062894 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:07.071164 kubelet[2669]: I1009 01:05:07.071083 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bqs6t" podStartSLOduration=22.071064151 podStartE2EDuration="22.071064151s" podCreationTimestamp="2024-10-09 01:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:05:07.070551599 +0000 UTC m=+38.240964240" watchObservedRunningTime="2024-10-09 01:05:07.071064151 +0000 UTC m=+38.241476792" Oct 9 01:05:07.094466 kubelet[2669]: I1009 01:05:07.094399 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f4nhl" podStartSLOduration=22.094378804 podStartE2EDuration="22.094378804s" podCreationTimestamp="2024-10-09 01:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:05:07.093806049 +0000 UTC m=+38.264218700" watchObservedRunningTime="2024-10-09 01:05:07.094378804 +0000 UTC m=+38.264791445" Oct 9 01:05:07.457998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942399370.mount: Deactivated successfully. Oct 9 01:05:08.064047 kubelet[2669]: E1009 01:05:08.063954 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:08.064047 kubelet[2669]: E1009 01:05:08.063952 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:09.065943 kubelet[2669]: E1009 01:05:09.065883 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:11.411475 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:44920.service - OpenSSH per-connection server daemon (10.0.0.1:44920). Oct 9 01:05:11.445945 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 44920 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:11.447572 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:11.451177 systemd-logind[1481]: New session 11 of user core. Oct 9 01:05:11.462448 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 9 01:05:11.567310 sshd[4109]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:11.579131 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:44920.service: Deactivated successfully. Oct 9 01:05:11.581070 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:05:11.582527 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:05:11.588587 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:44928.service - OpenSSH per-connection server daemon (10.0.0.1:44928). Oct 9 01:05:11.589460 systemd-logind[1481]: Removed session 11. Oct 9 01:05:11.615131 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 44928 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:11.616603 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:11.620085 systemd-logind[1481]: New session 12 of user core. Oct 9 01:05:11.625476 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:05:11.760061 sshd[4124]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:11.774628 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:44928.service: Deactivated successfully. Oct 9 01:05:11.779214 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:05:11.781554 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:05:11.793750 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:44936.service - OpenSSH per-connection server daemon (10.0.0.1:44936). Oct 9 01:05:11.794742 systemd-logind[1481]: Removed session 12. Oct 9 01:05:11.823549 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 44936 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:11.824942 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:11.828795 systemd-logind[1481]: New session 13 of user core. Oct 9 01:05:11.834459 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:05:11.935761 sshd[4137]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:11.939323 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:44936.service: Deactivated successfully. Oct 9 01:05:11.941302 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:05:11.942399 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:05:11.943411 systemd-logind[1481]: Removed session 13. Oct 9 01:05:12.623414 kubelet[2669]: I1009 01:05:12.623363 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:05:12.624293 kubelet[2669]: E1009 01:05:12.624087 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:13.070662 kubelet[2669]: E1009 01:05:13.070622 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:16.952532 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:44938.service - OpenSSH per-connection server daemon (10.0.0.1:44938). Oct 9 01:05:16.984485 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 44938 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:16.985958 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:16.989876 systemd-logind[1481]: New session 14 of user core. 
Oct 9 01:05:17.000468 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:05:17.109220 sshd[4154]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:17.113690 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:44938.service: Deactivated successfully. Oct 9 01:05:17.115915 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:05:17.116570 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:05:17.117494 systemd-logind[1481]: Removed session 14. Oct 9 01:05:22.120647 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:34670.service - OpenSSH per-connection server daemon (10.0.0.1:34670). Oct 9 01:05:22.154042 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 34670 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:22.155668 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:22.159493 systemd-logind[1481]: New session 15 of user core. Oct 9 01:05:22.169652 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:05:22.296864 sshd[4169]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:22.310373 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:34670.service: Deactivated successfully. Oct 9 01:05:22.313321 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:05:22.315745 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:05:22.325132 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:34682.service - OpenSSH per-connection server daemon (10.0.0.1:34682). Oct 9 01:05:22.326400 systemd-logind[1481]: Removed session 15. Oct 9 01:05:22.354781 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 34682 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:22.356463 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:22.361505 systemd-logind[1481]: New session 16 of user core. Oct 9 01:05:22.371534 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:05:22.559734 sshd[4183]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:22.574345 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:34682.service: Deactivated successfully. Oct 9 01:05:22.576107 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:05:22.577655 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:05:22.583600 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). Oct 9 01:05:22.584594 systemd-logind[1481]: Removed session 16. Oct 9 01:05:22.612124 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:22.613453 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:22.617479 systemd-logind[1481]: New session 17 of user core. Oct 9 01:05:22.630581 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:05:23.847110 sshd[4195]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:23.859289 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:34688.service: Deactivated successfully. Oct 9 01:05:23.862650 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:05:23.864528 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. 
Oct 9 01:05:23.870597 systemd[1]: Started sshd@17-10.0.0.110:22-10.0.0.1:34698.service - OpenSSH per-connection server daemon (10.0.0.1:34698). Oct 9 01:05:23.871127 systemd-logind[1481]: Removed session 17. Oct 9 01:05:23.909092 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 34698 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:23.910924 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:23.914996 systemd-logind[1481]: New session 18 of user core. Oct 9 01:05:23.924441 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:05:24.149438 sshd[4215]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:24.160517 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:34698.service: Deactivated successfully. Oct 9 01:05:24.162470 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:05:24.164082 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:05:24.165580 systemd[1]: Started sshd@18-10.0.0.110:22-10.0.0.1:34700.service - OpenSSH per-connection server daemon (10.0.0.1:34700). Oct 9 01:05:24.166556 systemd-logind[1481]: Removed session 18. Oct 9 01:05:24.210282 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 34700 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:24.211741 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:24.215730 systemd-logind[1481]: New session 19 of user core. Oct 9 01:05:24.220559 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:05:24.324311 sshd[4228]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:24.328879 systemd[1]: sshd@18-10.0.0.110:22-10.0.0.1:34700.service: Deactivated successfully. Oct 9 01:05:24.331603 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:05:24.332457 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:05:24.333716 systemd-logind[1481]: Removed session 19. Oct 9 01:05:29.355805 systemd[1]: Started sshd@19-10.0.0.110:22-10.0.0.1:45516.service - OpenSSH per-connection server daemon (10.0.0.1:45516). Oct 9 01:05:29.408794 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 45516 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:29.411891 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:29.420956 systemd-logind[1481]: New session 20 of user core. Oct 9 01:05:29.432793 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:05:29.609058 sshd[4244]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:29.617155 systemd[1]: sshd@19-10.0.0.110:22-10.0.0.1:45516.service: Deactivated successfully. Oct 9 01:05:29.619934 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:05:29.625035 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:05:29.629960 systemd-logind[1481]: Removed session 20. Oct 9 01:05:34.625343 systemd[1]: Started sshd@20-10.0.0.110:22-10.0.0.1:45526.service - OpenSSH per-connection server daemon (10.0.0.1:45526). Oct 9 01:05:34.698830 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 45526 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:34.701110 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:34.733944 systemd-logind[1481]: New session 21 of user core. 
Oct 9 01:05:34.748701 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 01:05:34.935788 sshd[4261]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:34.940567 systemd[1]: sshd@20-10.0.0.110:22-10.0.0.1:45526.service: Deactivated successfully. Oct 9 01:05:34.947211 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 01:05:34.948316 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. Oct 9 01:05:34.949551 systemd-logind[1481]: Removed session 21. Oct 9 01:05:39.948203 systemd[1]: Started sshd@21-10.0.0.110:22-10.0.0.1:48126.service - OpenSSH per-connection server daemon (10.0.0.1:48126). Oct 9 01:05:39.999596 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:39.998897 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:40.015686 systemd-logind[1481]: New session 22 of user core. Oct 9 01:05:40.025811 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 01:05:40.215257 sshd[4275]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:40.221276 systemd[1]: sshd@21-10.0.0.110:22-10.0.0.1:48126.service: Deactivated successfully. Oct 9 01:05:40.223781 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 01:05:40.231701 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. Oct 9 01:05:40.240614 systemd-logind[1481]: Removed session 22. Oct 9 01:05:40.927514 kubelet[2669]: E1009 01:05:40.927478 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:05:45.227268 systemd[1]: Started sshd@22-10.0.0.110:22-10.0.0.1:48128.service - OpenSSH per-connection server daemon (10.0.0.1:48128). Oct 9 01:05:45.259322 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 48128 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:45.260979 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:45.265302 systemd-logind[1481]: New session 23 of user core. Oct 9 01:05:45.277525 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 01:05:45.384352 sshd[4289]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:45.394350 systemd[1]: sshd@22-10.0.0.110:22-10.0.0.1:48128.service: Deactivated successfully. Oct 9 01:05:45.396187 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 01:05:45.397573 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit. Oct 9 01:05:45.398984 systemd[1]: Started sshd@23-10.0.0.110:22-10.0.0.1:48130.service - OpenSSH per-connection server daemon (10.0.0.1:48130). Oct 9 01:05:45.400136 systemd-logind[1481]: Removed session 23. Oct 9 01:05:45.430472 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 48130 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:45.431844 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:45.436208 systemd-logind[1481]: New session 24 of user core. Oct 9 01:05:45.445470 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 9 01:05:46.767522 containerd[1492]: time="2024-10-09T01:05:46.767459790Z" level=info msg="StopContainer for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" with timeout 30 (s)" Oct 9 01:05:46.775588 containerd[1492]: time="2024-10-09T01:05:46.775196732Z" level=info msg="Stop container \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" with signal terminated" Oct 9 01:05:46.786175 systemd[1]: cri-containerd-4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37.scope: Deactivated successfully. Oct 9 01:05:46.802638 containerd[1492]: time="2024-10-09T01:05:46.802567907Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:05:46.804195 containerd[1492]: time="2024-10-09T01:05:46.803955128Z" level=info msg="StopContainer for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" with timeout 2 (s)" Oct 9 01:05:46.805142 containerd[1492]: time="2024-10-09T01:05:46.804718332Z" level=info msg="Stop container \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" with signal terminated" Oct 9 01:05:46.812951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37-rootfs.mount: Deactivated successfully. Oct 9 01:05:46.815663 systemd-networkd[1399]: lxc_health: Link DOWN Oct 9 01:05:46.815684 systemd-networkd[1399]: lxc_health: Lost carrier Oct 9 01:05:46.828545 containerd[1492]: time="2024-10-09T01:05:46.828466066Z" level=info msg="shim disconnected" id=4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37 namespace=k8s.io Oct 9 01:05:46.828545 containerd[1492]: time="2024-10-09T01:05:46.828534858Z" level=warning msg="cleaning up after shim disconnected" id=4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37 namespace=k8s.io Oct 9 01:05:46.828545 containerd[1492]: time="2024-10-09T01:05:46.828548605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:05:46.837973 systemd[1]: cri-containerd-71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61.scope: Deactivated successfully. Oct 9 01:05:46.838499 systemd[1]: cri-containerd-71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61.scope: Consumed 7.315s CPU time. Oct 9 01:05:46.849908 containerd[1492]: time="2024-10-09T01:05:46.849860798Z" level=info msg="StopContainer for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" returns successfully" Oct 9 01:05:46.850550 containerd[1492]: time="2024-10-09T01:05:46.850509943Z" level=info msg="StopPodSandbox for \"3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5\"" Oct 9 01:05:46.854730 containerd[1492]: time="2024-10-09T01:05:46.850563245Z" level=info msg="Container to stop \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:05:46.856760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5-shm.mount: Deactivated successfully. Oct 9 01:05:46.863661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61-rootfs.mount: Deactivated successfully. 
Oct 9 01:05:46.865130 systemd[1]: cri-containerd-3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5.scope: Deactivated successfully. Oct 9 01:05:46.890686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5-rootfs.mount: Deactivated successfully. Oct 9 01:05:46.891667 containerd[1492]: time="2024-10-09T01:05:46.891575397Z" level=info msg="shim disconnected" id=71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61 namespace=k8s.io Oct 9 01:05:46.891667 containerd[1492]: time="2024-10-09T01:05:46.891641083Z" level=warning msg="cleaning up after shim disconnected" id=71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61 namespace=k8s.io Oct 9 01:05:46.891667 containerd[1492]: time="2024-10-09T01:05:46.891652966Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:05:46.895412 containerd[1492]: time="2024-10-09T01:05:46.892538805Z" level=info msg="shim disconnected" id=3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5 namespace=k8s.io Oct 9 01:05:46.895412 containerd[1492]: time="2024-10-09T01:05:46.892609571Z" level=warning msg="cleaning up after shim disconnected" id=3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5 namespace=k8s.io Oct 9 01:05:46.895412 containerd[1492]: time="2024-10-09T01:05:46.892622826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:05:46.919460 containerd[1492]: time="2024-10-09T01:05:46.919395133Z" level=warning msg="cleanup warnings time=\"2024-10-09T01:05:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 01:05:46.921685 containerd[1492]: time="2024-10-09T01:05:46.921421249Z" level=info msg="TearDown network for sandbox \"3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5\" successfully" Oct 9 01:05:46.921685 containerd[1492]: time="2024-10-09T01:05:46.921447269Z" level=info msg="StopPodSandbox for \"3d26f090cfbf85f6f79304bc952828bc075929460639db3ab3d023825b4e65c5\" returns successfully" Oct 9 01:05:46.925189 containerd[1492]: time="2024-10-09T01:05:46.925134702Z" level=info msg="StopContainer for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" returns successfully" Oct 9 01:05:46.927377 containerd[1492]: time="2024-10-09T01:05:46.926325767Z" level=info msg="StopPodSandbox for \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\"" Oct 9 01:05:46.927377 containerd[1492]: time="2024-10-09T01:05:46.926378448Z" level=info msg="Container to stop \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:05:46.927377 containerd[1492]: time="2024-10-09T01:05:46.926421380Z" level=info msg="Container to stop \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:05:46.927377 containerd[1492]: time="2024-10-09T01:05:46.926434236Z" level=info msg="Container to stop \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:05:46.927377 containerd[1492]: time="2024-10-09T01:05:46.926446509Z" level=info msg="Container to stop \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Oct 9 01:05:46.927377 containerd[1492]: time="2024-10-09T01:05:46.926457921Z" level=info msg="Container to stop \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:05:46.935975 systemd[1]: cri-containerd-51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891.scope: Deactivated successfully. Oct 9 01:05:47.005869 containerd[1492]: time="2024-10-09T01:05:47.005798322Z" level=info msg="shim disconnected" id=51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891 namespace=k8s.io Oct 9 01:05:47.006572 containerd[1492]: time="2024-10-09T01:05:47.006388162Z" level=warning msg="cleaning up after shim disconnected" id=51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891 namespace=k8s.io Oct 9 01:05:47.006572 containerd[1492]: time="2024-10-09T01:05:47.006426375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:05:47.029979 containerd[1492]: time="2024-10-09T01:05:47.029813050Z" level=info msg="TearDown network for sandbox \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" successfully" Oct 9 01:05:47.029979 containerd[1492]: time="2024-10-09T01:05:47.029853598Z" level=info msg="StopPodSandbox for \"51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891\" returns successfully" Oct 9 01:05:47.052906 kubelet[2669]: I1009 01:05:47.050100 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-cilium-config-path\") pod \"20fbf6b2-9b5c-4bd3-8206-7d2875bf0958\" (UID: \"20fbf6b2-9b5c-4bd3-8206-7d2875bf0958\") " Oct 9 01:05:47.052906 kubelet[2669]: I1009 01:05:47.050170 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l75fz\" (UniqueName: \"kubernetes.io/projected/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-kube-api-access-l75fz\") pod \"20fbf6b2-9b5c-4bd3-8206-7d2875bf0958\" (UID: \"20fbf6b2-9b5c-4bd3-8206-7d2875bf0958\") " Oct 9 01:05:47.054733 kubelet[2669]: I1009 01:05:47.054671 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-kube-api-access-l75fz" (OuterVolumeSpecName: "kube-api-access-l75fz") pod "20fbf6b2-9b5c-4bd3-8206-7d2875bf0958" (UID: "20fbf6b2-9b5c-4bd3-8206-7d2875bf0958"). InnerVolumeSpecName "kube-api-access-l75fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:05:47.055225 kubelet[2669]: I1009 01:05:47.055072 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20fbf6b2-9b5c-4bd3-8206-7d2875bf0958" (UID: "20fbf6b2-9b5c-4bd3-8206-7d2875bf0958"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:05:47.151352 kubelet[2669]: I1009 01:05:47.151286 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-run\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151352 kubelet[2669]: I1009 01:05:47.151363 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cni-path\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151549 kubelet[2669]: I1009 01:05:47.151385 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-bpf-maps\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151549 kubelet[2669]: I1009 01:05:47.151407 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-lib-modules\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151549 kubelet[2669]: I1009 01:05:47.151439 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hubble-tls\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151549 kubelet[2669]: I1009 01:05:47.151457 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hostproc\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151549 kubelet[2669]: I1009 01:05:47.151479 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-config-path\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151549 kubelet[2669]: I1009 01:05:47.151480 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cni-path" (OuterVolumeSpecName: "cni-path") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.151694 kubelet[2669]: I1009 01:05:47.151551 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.151694 kubelet[2669]: I1009 01:05:47.151506 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-xtables-lock\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151694 kubelet[2669]: I1009 01:05:47.151476 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.151694 kubelet[2669]: I1009 01:05:47.151605 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hostproc" (OuterVolumeSpecName: "hostproc") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.151694 kubelet[2669]: I1009 01:05:47.151611 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-net\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151822 kubelet[2669]: I1009 01:05:47.151646 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72sdp\" (UniqueName: \"kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-kube-api-access-72sdp\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151822 kubelet[2669]: I1009 01:05:47.151669 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-kernel\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151822 kubelet[2669]: I1009 01:05:47.151688 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-etc-cni-netd\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151822 kubelet[2669]: I1009 01:05:47.151734 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-clustermesh-secrets\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151822 kubelet[2669]: I1009 01:05:47.151756 2669 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-cgroup\") pod \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\" (UID: \"df4c8b2f-2422-4c79-ba86-ee6d1e51aecb\") " Oct 9 01:05:47.151822 kubelet[2669]: I1009 01:05:47.151818 2669 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.151957 kubelet[2669]: I1009 01:05:47.151835 2669 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l75fz\" (UniqueName: \"kubernetes.io/projected/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-kube-api-access-l75fz\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.151957 kubelet[2669]: I1009 01:05:47.151851 2669 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.151957 kubelet[2669]: I1009 01:05:47.151863 2669 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.151957 kubelet[2669]: I1009 01:05:47.151875 2669 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.151957 kubelet[2669]: I1009 01:05:47.151906 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.151957 kubelet[2669]: I1009 01:05:47.151933 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.152363 kubelet[2669]: I1009 01:05:47.152194 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.152363 kubelet[2669]: I1009 01:05:47.152258 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.155636 kubelet[2669]: I1009 01:05:47.155476 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.155636 kubelet[2669]: I1009 01:05:47.155544 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:05:47.155948 kubelet[2669]: I1009 01:05:47.155925 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:05:47.156147 kubelet[2669]: I1009 01:05:47.156123 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 01:05:47.156216 kubelet[2669]: I1009 01:05:47.156186 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-kube-api-access-72sdp" (OuterVolumeSpecName: "kube-api-access-72sdp") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "kube-api-access-72sdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:05:47.156283 kubelet[2669]: I1009 01:05:47.156246 2669 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" (UID: "df4c8b2f-2422-4c79-ba86-ee6d1e51aecb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:05:47.160417 kubelet[2669]: I1009 01:05:47.160300 2669 scope.go:117] "RemoveContainer" containerID="4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37" Oct 9 01:05:47.169086 containerd[1492]: time="2024-10-09T01:05:47.168749833Z" level=info msg="RemoveContainer for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\"" Oct 9 01:05:47.169137 systemd[1]: Removed slice kubepods-besteffort-pod20fbf6b2_9b5c_4bd3_8206_7d2875bf0958.slice - libcontainer container kubepods-besteffort-pod20fbf6b2_9b5c_4bd3_8206_7d2875bf0958.slice. Oct 9 01:05:47.175118 systemd[1]: Removed slice kubepods-burstable-poddf4c8b2f_2422_4c79_ba86_ee6d1e51aecb.slice - libcontainer container kubepods-burstable-poddf4c8b2f_2422_4c79_ba86_ee6d1e51aecb.slice. Oct 9 01:05:47.175250 systemd[1]: kubepods-burstable-poddf4c8b2f_2422_4c79_ba86_ee6d1e51aecb.slice: Consumed 7.423s CPU time. 
Oct 9 01:05:47.186509 containerd[1492]: time="2024-10-09T01:05:47.186434927Z" level=info msg="RemoveContainer for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" returns successfully" Oct 9 01:05:47.186923 kubelet[2669]: I1009 01:05:47.186829 2669 scope.go:117] "RemoveContainer" containerID="4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37" Oct 9 01:05:47.187288 containerd[1492]: time="2024-10-09T01:05:47.187224712Z" level=error msg="ContainerStatus for \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\": not found" Oct 9 01:05:47.200650 kubelet[2669]: E1009 01:05:47.200593 2669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\": not found" containerID="4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37" Oct 9 01:05:47.200822 kubelet[2669]: I1009 01:05:47.200643 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37"} err="failed to get container status \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b0278c85a6b8901f14001d574ea1723c61e5488460fba750ecc9bfd78c4ee37\": not found" Oct 9 01:05:47.200822 kubelet[2669]: I1009 01:05:47.200769 2669 scope.go:117] "RemoveContainer" containerID="71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61" Oct 9 01:05:47.202226 containerd[1492]: time="2024-10-09T01:05:47.202182317Z" level=info msg="RemoveContainer for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\"" Oct 9 01:05:47.207328 containerd[1492]: time="2024-10-09T01:05:47.207281463Z" level=info msg="RemoveContainer for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" returns successfully" Oct 9 01:05:47.207597 kubelet[2669]: I1009 01:05:47.207561 2669 scope.go:117] "RemoveContainer" containerID="68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6" Oct 9 01:05:47.209449 containerd[1492]: time="2024-10-09T01:05:47.209417619Z" level=info msg="RemoveContainer for \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\"" Oct 9 01:05:47.214410 containerd[1492]: time="2024-10-09T01:05:47.214050680Z" level=info msg="RemoveContainer for \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\" returns successfully" Oct 9 01:05:47.214538 kubelet[2669]: I1009 01:05:47.214292 2669 scope.go:117] "RemoveContainer" containerID="690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0" Oct 9 01:05:47.215913 containerd[1492]: time="2024-10-09T01:05:47.215875489Z" level=info msg="RemoveContainer for \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\"" Oct 9 01:05:47.234689 containerd[1492]: time="2024-10-09T01:05:47.234553357Z" level=info msg="RemoveContainer for \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\" returns successfully" Oct 9 01:05:47.234976 kubelet[2669]: I1009 01:05:47.234859 2669 scope.go:117] "RemoveContainer" containerID="49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21" Oct 9 01:05:47.237384 containerd[1492]: 
time="2024-10-09T01:05:47.237016059Z" level=info msg="RemoveContainer for \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\"" Oct 9 01:05:47.245397 containerd[1492]: time="2024-10-09T01:05:47.245292616Z" level=info msg="RemoveContainer for \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\" returns successfully" Oct 9 01:05:47.245781 kubelet[2669]: I1009 01:05:47.245733 2669 scope.go:117] "RemoveContainer" containerID="456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36" Oct 9 01:05:47.247892 containerd[1492]: time="2024-10-09T01:05:47.247656417Z" level=info msg="RemoveContainer for \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\"" Oct 9 01:05:47.252695 kubelet[2669]: I1009 01:05:47.252621 2669 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252695 kubelet[2669]: I1009 01:05:47.252658 2669 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252695 kubelet[2669]: I1009 01:05:47.252677 2669 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-72sdp\" (UniqueName: \"kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-kube-api-access-72sdp\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252695 kubelet[2669]: I1009 01:05:47.252693 2669 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252774 2669 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252790 2669 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252803 2669 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252817 2669 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252831 2669 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252842 2669 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.252999 kubelet[2669]: I1009 01:05:47.252854 2669 reconciler_common.go:289] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 9 01:05:47.253862 containerd[1492]: time="2024-10-09T01:05:47.253784797Z" level=info msg="RemoveContainer for \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\" returns successfully" Oct 9 01:05:47.254164 kubelet[2669]: I1009 01:05:47.254098 2669 scope.go:117] "RemoveContainer" containerID="71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61" Oct 9 01:05:47.257069 containerd[1492]: time="2024-10-09T01:05:47.256952729Z" level=error msg="ContainerStatus for \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\": not found" Oct 9 01:05:47.257275 kubelet[2669]: E1009 01:05:47.257168 2669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\": not found" containerID="71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61" Oct 9 01:05:47.257275 kubelet[2669]: I1009 01:05:47.257253 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61"} err="failed to get container status \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\": rpc error: code = NotFound desc = an error occurred when try to find container \"71f61d4aa463725924538c9b9abd7d7c8e3cef5eb2168e20346a6d0e0cb1da61\": not found" Oct 9 01:05:47.257397 kubelet[2669]: I1009 01:05:47.257288 2669 scope.go:117] "RemoveContainer" containerID="68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6" Oct 9 01:05:47.258039 containerd[1492]: time="2024-10-09T01:05:47.257957095Z" level=error msg="ContainerStatus for \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\": not found" Oct 9 01:05:47.258373 kubelet[2669]: E1009 01:05:47.258178 2669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\": not found" containerID="68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6" Oct 9 01:05:47.258373 kubelet[2669]: I1009 01:05:47.258216 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6"} err="failed to get container status \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"68a99bb82a3877eed9b42e04450b5536d70c9a9ab93678529ab3dff62efdafc6\": not found" Oct 9 01:05:47.258373 kubelet[2669]: I1009 01:05:47.258244 2669 scope.go:117] "RemoveContainer" containerID="690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0" Oct 9 01:05:47.258502 containerd[1492]: time="2024-10-09T01:05:47.258477423Z" level=error msg="ContainerStatus for \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\": not found" Oct 9 01:05:47.258645 kubelet[2669]: E1009 01:05:47.258603 2669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\": not found" containerID="690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0" Oct 9 01:05:47.258707 kubelet[2669]: I1009 01:05:47.258636 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0"} err="failed to get container status \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"690bb75f57e83b0c526e15c8ba740278b626f259c8d5b59b4e421316cce5f9a0\": not found" Oct 9 01:05:47.258707 kubelet[2669]: I1009 01:05:47.258661 2669 scope.go:117] "RemoveContainer" containerID="49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21" Oct 9 01:05:47.259256 containerd[1492]: time="2024-10-09T01:05:47.259210168Z" level=error msg="ContainerStatus for \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\": not found" Oct 9 01:05:47.259460 kubelet[2669]: E1009 01:05:47.259389 2669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\": not found" containerID="49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21" Oct 9 01:05:47.259460 kubelet[2669]: I1009 01:05:47.259430 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21"} err="failed to get container status \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\": rpc error: code = NotFound desc = an error occurred when try to find container \"49c7879c387ce5e2476d18756331950a80ea2aae79a71ecaaab231b7f68cbd21\": not found" Oct 9 01:05:47.259460 kubelet[2669]: I1009 01:05:47.259453 2669 scope.go:117] "RemoveContainer" containerID="456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36" Oct 9 01:05:47.259938 containerd[1492]: time="2024-10-09T01:05:47.259890451Z" level=error msg="ContainerStatus for \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\": not found" Oct 9 01:05:47.260032 kubelet[2669]: E1009 01:05:47.260010 2669 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\": not found" containerID="456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36" Oct 9 01:05:47.260071 kubelet[2669]: I1009 01:05:47.260037 2669 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36"} err="failed to get container status \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\": rpc error: code = NotFound desc = an error occurred when try to find container \"456322c3da5646cecf1096eb7b0f765a93758908e7d16a16639ed19f66625c36\": not found" Oct 9 01:05:47.775872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891-rootfs.mount: Deactivated successfully. Oct 9 01:05:47.776007 systemd[1]: var-lib-kubelet-pods-20fbf6b2\x2d9b5c\x2d4bd3\x2d8206\x2d7d2875bf0958-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl75fz.mount: Deactivated successfully. Oct 9 01:05:47.776109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51e56bbffdd1bed5578f6971883f92792589c2a00b04fcda1e5789f072240891-shm.mount: Deactivated successfully. Oct 9 01:05:47.776184 systemd[1]: var-lib-kubelet-pods-df4c8b2f\x2d2422\x2d4c79\x2dba86\x2dee6d1e51aecb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72sdp.mount: Deactivated successfully. Oct 9 01:05:47.776267 systemd[1]: var-lib-kubelet-pods-df4c8b2f\x2d2422\x2d4c79\x2dba86\x2dee6d1e51aecb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 9 01:05:47.776386 systemd[1]: var-lib-kubelet-pods-df4c8b2f\x2d2422\x2d4c79\x2dba86\x2dee6d1e51aecb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 01:05:48.752254 sshd[4303]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:48.770986 systemd[1]: sshd@23-10.0.0.110:22-10.0.0.1:48130.service: Deactivated successfully. Oct 9 01:05:48.773324 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 01:05:48.776973 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit. Oct 9 01:05:48.782863 systemd[1]: Started sshd@24-10.0.0.110:22-10.0.0.1:34226.service - OpenSSH per-connection server daemon (10.0.0.1:34226). Oct 9 01:05:48.784216 systemd-logind[1481]: Removed session 24. Oct 9 01:05:48.855984 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 34226 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 01:05:48.859413 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:48.868041 systemd-logind[1481]: New session 25 of user core. Oct 9 01:05:48.873651 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 9 01:05:48.936288 kubelet[2669]: I1009 01:05:48.936230 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fbf6b2-9b5c-4bd3-8206-7d2875bf0958" path="/var/lib/kubelet/pods/20fbf6b2-9b5c-4bd3-8206-7d2875bf0958/volumes" Oct 9 01:05:48.937149 kubelet[2669]: I1009 01:05:48.937089 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" path="/var/lib/kubelet/pods/df4c8b2f-2422-4c79-ba86-ee6d1e51aecb/volumes" Oct 9 01:05:48.987124 kubelet[2669]: E1009 01:05:48.987062 2669 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 9 01:05:49.454627 sshd[4467]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:49.465979 kubelet[2669]: I1009 01:05:49.465778 2669 topology_manager.go:215] "Topology Admit Handler" podUID="cab35e7c-0056-4654-a0be-14337b3672ea" podNamespace="kube-system" podName="cilium-9flsf" Oct 9 01:05:49.465979 kubelet[2669]: E1009 01:05:49.465833 2669 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" containerName="clean-cilium-state" Oct 9 01:05:49.465979 kubelet[2669]: E1009 01:05:49.465843 2669 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" containerName="mount-cgroup" Oct 9 01:05:49.465979 kubelet[2669]: E1009 01:05:49.465850 2669 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20fbf6b2-9b5c-4bd3-8206-7d2875bf0958" containerName="cilium-operator" Oct 9 01:05:49.465979 kubelet[2669]: E1009 01:05:49.465856 2669 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" containerName="apply-sysctl-overwrites" Oct 9 01:05:49.465979 kubelet[2669]: E1009 01:05:49.465862 2669 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" containerName="mount-bpf-fs" Oct 9 01:05:49.465979 kubelet[2669]: E1009 01:05:49.465868 2669 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" containerName="cilium-agent" Oct 9 01:05:49.467043 kubelet[2669]: I1009 01:05:49.466404 2669 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4c8b2f-2422-4c79-ba86-ee6d1e51aecb" containerName="cilium-agent" Oct 9 01:05:49.467043 kubelet[2669]: I1009 01:05:49.466421 2669 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fbf6b2-9b5c-4bd3-8206-7d2875bf0958" containerName="cilium-operator" Oct 9 01:05:49.470301 systemd[1]: sshd@24-10.0.0.110:22-10.0.0.1:34226.service: Deactivated successfully. Oct 9 01:05:49.475724 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 01:05:49.478441 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit. Oct 9 01:05:49.484715 systemd[1]: Started sshd@25-10.0.0.110:22-10.0.0.1:34228.service - OpenSSH per-connection server daemon (10.0.0.1:34228). Oct 9 01:05:49.487647 systemd-logind[1481]: Removed session 25. Oct 9 01:05:49.492375 systemd[1]: Created slice kubepods-burstable-podcab35e7c_0056_4654_a0be_14337b3672ea.slice - libcontainer container kubepods-burstable-podcab35e7c_0056_4654_a0be_14337b3672ea.slice. 
Oct 9 01:05:49.519439 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 34228 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:05:49.520978 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:05:49.524768 systemd-logind[1481]: New session 26 of user core.
Oct 9 01:05:49.537490 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 01:05:49.571428 kubelet[2669]: I1009 01:05:49.571376 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-etc-cni-netd\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571428 kubelet[2669]: I1009 01:05:49.571417 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-lib-modules\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571428 kubelet[2669]: I1009 01:05:49.571431 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-xtables-lock\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571586 kubelet[2669]: I1009 01:05:49.571447 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cab35e7c-0056-4654-a0be-14337b3672ea-cilium-ipsec-secrets\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571586 kubelet[2669]: I1009 01:05:49.571462 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-cilium-run\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571586 kubelet[2669]: I1009 01:05:49.571479 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-host-proc-sys-kernel\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571586 kubelet[2669]: I1009 01:05:49.571494 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-cni-path\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571586 kubelet[2669]: I1009 01:05:49.571516 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-bpf-maps\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571586 kubelet[2669]: I1009 01:05:49.571530 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-hostproc\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571781 kubelet[2669]: I1009 01:05:49.571550 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cab35e7c-0056-4654-a0be-14337b3672ea-clustermesh-secrets\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571781 kubelet[2669]: I1009 01:05:49.571567 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-cilium-cgroup\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571781 kubelet[2669]: I1009 01:05:49.571584 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnwn\" (UniqueName: \"kubernetes.io/projected/cab35e7c-0056-4654-a0be-14337b3672ea-kube-api-access-9fnwn\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571781 kubelet[2669]: I1009 01:05:49.571599 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cab35e7c-0056-4654-a0be-14337b3672ea-cilium-config-path\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571781 kubelet[2669]: I1009 01:05:49.571612 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cab35e7c-0056-4654-a0be-14337b3672ea-hubble-tls\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.571936 kubelet[2669]: I1009 01:05:49.571667 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cab35e7c-0056-4654-a0be-14337b3672ea-host-proc-sys-net\") pod \"cilium-9flsf\" (UID: \"cab35e7c-0056-4654-a0be-14337b3672ea\") " pod="kube-system/cilium-9flsf"
Oct 9 01:05:49.587490 sshd[4480]: pam_unix(sshd:session): session closed for user core
Oct 9 01:05:49.606377 systemd[1]: sshd@25-10.0.0.110:22-10.0.0.1:34228.service: Deactivated successfully.
Oct 9 01:05:49.608353 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 01:05:49.609896 systemd-logind[1481]: Session 26 logged out. Waiting for processes to exit.
Oct 9 01:05:49.611411 systemd[1]: Started sshd@26-10.0.0.110:22-10.0.0.1:34244.service - OpenSSH per-connection server daemon (10.0.0.1:34244).
Oct 9 01:05:49.612466 systemd-logind[1481]: Removed session 26.
Oct 9 01:05:49.655039 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 34244 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws
Oct 9 01:05:49.657198 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:05:49.663797 systemd-logind[1481]: New session 27 of user core.
Oct 9 01:05:49.677954 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 9 01:05:49.796508 kubelet[2669]: E1009 01:05:49.796373 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:49.798819 containerd[1492]: time="2024-10-09T01:05:49.798141284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9flsf,Uid:cab35e7c-0056-4654-a0be-14337b3672ea,Namespace:kube-system,Attempt:0,}"
Oct 9 01:05:49.822420 containerd[1492]: time="2024-10-09T01:05:49.822312743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:05:49.822420 containerd[1492]: time="2024-10-09T01:05:49.822380963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:05:49.822420 containerd[1492]: time="2024-10-09T01:05:49.822395431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:05:49.822584 containerd[1492]: time="2024-10-09T01:05:49.822468731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:05:49.840663 systemd[1]: Started cri-containerd-15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083.scope - libcontainer container 15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083.
Oct 9 01:05:49.868722 containerd[1492]: time="2024-10-09T01:05:49.868655679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9flsf,Uid:cab35e7c-0056-4654-a0be-14337b3672ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\""
Oct 9 01:05:49.869688 kubelet[2669]: E1009 01:05:49.869655 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:49.872052 containerd[1492]: time="2024-10-09T01:05:49.872017767Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 9 01:05:49.891821 containerd[1492]: time="2024-10-09T01:05:49.891737440Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f\""
Oct 9 01:05:49.892702 containerd[1492]: time="2024-10-09T01:05:49.892648425Z" level=info msg="StartContainer for \"ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f\""
Oct 9 01:05:49.930572 systemd[1]: Started cri-containerd-ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f.scope - libcontainer container ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f.
Oct 9 01:05:49.965779 containerd[1492]: time="2024-10-09T01:05:49.965727761Z" level=info msg="StartContainer for \"ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f\" returns successfully"
Oct 9 01:05:49.976846 systemd[1]: cri-containerd-ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f.scope: Deactivated successfully.
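The containerd entries above trace the CRI call sequence for the new pod: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox for the first init step (mount-cgroup), and StartContainer runs it inside a transient cri-containerd-<id>.scope unit. A sketch of the same sequence driven by hand through crictl; the JSON config filenames are placeholders, since in normal operation the kubelet, not an operator, issues these requests:

#!/usr/bin/env python3
"""Replay the RunPodSandbox -> CreateContainer -> StartContainer flow via crictl."""
import subprocess

def crictl(*args: str) -> str:
    return subprocess.run(["crictl", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# 1. Create the pod sandbox (containerd: "RunPodSandbox ... returns sandbox id ...").
pod_id = crictl("runp", "pod-sandbox.json")

# 2. Create a container inside it (containerd: "CreateContainer within sandbox ...").
ctr_id = crictl("create", pod_id, "container-mount-cgroup.json", "pod-sandbox.json")

# 3. Start it (containerd: "StartContainer ... returns successfully").
crictl("start", ctr_id)
print(f"sandbox={pod_id} container={ctr_id}")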
Oct 9 01:05:50.010580 containerd[1492]: time="2024-10-09T01:05:50.010518756Z" level=info msg="shim disconnected" id=ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f namespace=k8s.io
Oct 9 01:05:50.010580 containerd[1492]: time="2024-10-09T01:05:50.010572971Z" level=warning msg="cleaning up after shim disconnected" id=ca8e21c137dd6ed3a69f8b4dc051e31c41445b9ab7d5e0a97b643088de24978f namespace=k8s.io
Oct 9 01:05:50.010580 containerd[1492]: time="2024-10-09T01:05:50.010582148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:05:50.179470 kubelet[2669]: E1009 01:05:50.179430 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:50.182293 containerd[1492]: time="2024-10-09T01:05:50.182249264Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 9 01:05:50.195856 containerd[1492]: time="2024-10-09T01:05:50.195805560Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61\""
Oct 9 01:05:50.196512 containerd[1492]: time="2024-10-09T01:05:50.196373577Z" level=info msg="StartContainer for \"3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61\""
Oct 9 01:05:50.228612 systemd[1]: Started cri-containerd-3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61.scope - libcontainer container 3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61.
Oct 9 01:05:50.257042 containerd[1492]: time="2024-10-09T01:05:50.256940541Z" level=info msg="StartContainer for \"3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61\" returns successfully"
Oct 9 01:05:50.264014 systemd[1]: cri-containerd-3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61.scope: Deactivated successfully.
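The "shim disconnected" / "cleaning up dead shim" pair above is the normal teardown of a short-lived container: mount-cgroup exits, its runc shim goes away, and kubelet immediately requests the next init step (apply-sysctl-overwrites). A sketch for checking afterwards that these init containers terminated cleanly, assuming this is the usual Cilium init-container layout and that kubectl can reach the cluster:

#!/usr/bin/env python3
"""Print termination state of cilium-9flsf's init containers (mount-cgroup, apply-sysctl-overwrites, ...)."""
import json
import subprocess

out = subprocess.run(
    ["kubectl", "-n", "kube-system", "get", "pod", "cilium-9flsf", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout
status = json.loads(out)["status"]

for ics in status.get("initContainerStatuses", []):
    terminated = ics.get("state", {}).get("terminated", {})
    print(f"{ics['name']:25} exitCode={terminated.get('exitCode')} "
          f"reason={terminated.get('reason')}")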
Oct 9 01:05:50.297892 containerd[1492]: time="2024-10-09T01:05:50.297806127Z" level=info msg="shim disconnected" id=3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61 namespace=k8s.io
Oct 9 01:05:50.297892 containerd[1492]: time="2024-10-09T01:05:50.297875910Z" level=warning msg="cleaning up after shim disconnected" id=3091ad5a782111f3ddc891c29acecec535d83b83635273c738b0164532cffe61 namespace=k8s.io
Oct 9 01:05:50.297892 containerd[1492]: time="2024-10-09T01:05:50.297889546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:05:50.859067 kubelet[2669]: I1009 01:05:50.859000 2669 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-09T01:05:50Z","lastTransitionTime":"2024-10-09T01:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 9 01:05:51.182690 kubelet[2669]: E1009 01:05:51.182641 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:51.184322 containerd[1492]: time="2024-10-09T01:05:51.184278628Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 9 01:05:51.225281 containerd[1492]: time="2024-10-09T01:05:51.225223695Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7\""
Oct 9 01:05:51.225846 containerd[1492]: time="2024-10-09T01:05:51.225816769Z" level=info msg="StartContainer for \"6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7\""
Oct 9 01:05:51.260481 systemd[1]: Started cri-containerd-6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7.scope - libcontainer container 6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7.
Oct 9 01:05:51.289472 containerd[1492]: time="2024-10-09T01:05:51.289317845Z" level=info msg="StartContainer for \"6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7\" returns successfully"
Oct 9 01:05:51.290995 systemd[1]: cri-containerd-6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7.scope: Deactivated successfully.
Oct 9 01:05:51.317573 containerd[1492]: time="2024-10-09T01:05:51.317514845Z" level=info msg="shim disconnected" id=6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7 namespace=k8s.io
Oct 9 01:05:51.317573 containerd[1492]: time="2024-10-09T01:05:51.317565392Z" level=warning msg="cleaning up after shim disconnected" id=6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7 namespace=k8s.io
Oct 9 01:05:51.317573 containerd[1492]: time="2024-10-09T01:05:51.317573709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:05:51.680018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fa616f66e2862b82e95e2d86dbf63c0ce7f7e8234ed99c75c06051a9dd9b1d7-rootfs.mount: Deactivated successfully.
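The setters.go:580 entry records kubelet flipping the node's Ready condition to False with reason KubeletNotReady because the CNI plugin is not initialized yet; it should clear once the cilium-agent container started below brings networking up. A minimal sketch for reading that condition back, using the node name "localhost" from the log and assuming kubectl access:

#!/usr/bin/env python3
"""Print the node's Ready condition as reported by the API server."""
import json
import subprocess

out = subprocess.run(
    ["kubectl", "get", "node", "localhost", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

for cond in json.loads(out)["status"]["conditions"]:
    if cond["type"] == "Ready":
        # Matches the condition JSON kubelet logged: status, reason, message
        print(cond["status"], cond["reason"], cond["message"], sep=" | ")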
Oct 9 01:05:51.927358 kubelet[2669]: E1009 01:05:51.927298 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:52.186637 kubelet[2669]: E1009 01:05:52.186601 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:52.190303 containerd[1492]: time="2024-10-09T01:05:52.190263275Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 9 01:05:52.223276 containerd[1492]: time="2024-10-09T01:05:52.223213998Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1\""
Oct 9 01:05:52.224075 containerd[1492]: time="2024-10-09T01:05:52.224015941Z" level=info msg="StartContainer for \"ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1\""
Oct 9 01:05:52.276929 systemd[1]: Started cri-containerd-ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1.scope - libcontainer container ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1.
Oct 9 01:05:52.322395 systemd[1]: cri-containerd-ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1.scope: Deactivated successfully.
Oct 9 01:05:52.333801 containerd[1492]: time="2024-10-09T01:05:52.333717333Z" level=info msg="StartContainer for \"ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1\" returns successfully"
Oct 9 01:05:52.403117 containerd[1492]: time="2024-10-09T01:05:52.403030981Z" level=info msg="shim disconnected" id=ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1 namespace=k8s.io
Oct 9 01:05:52.403117 containerd[1492]: time="2024-10-09T01:05:52.403109922Z" level=warning msg="cleaning up after shim disconnected" id=ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1 namespace=k8s.io
Oct 9 01:05:52.403117 containerd[1492]: time="2024-10-09T01:05:52.403121956Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:05:52.680394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee76cea64e5ee9f5e242bbdd4afd0030cb5d2ae54db723a03d850ef6d91088a1-rootfs.mount: Deactivated successfully.
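The recurring dns.go:153 error is kubelet warning that the host's resolv.conf lists more nameservers than the libc resolver will honour (three), so the list handed to pods is trimmed, which is why the logged line is cut down to "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of the same check, assuming the conventional /etc/resolv.conf path rather than a custom --resolv-conf:

#!/usr/bin/env python3
"""Warn if resolv.conf lists more nameservers than the resolver limit kubelet enforces."""
MAX_NAMESERVERS = 3  # glibc MAXNS; kubelet trims to this many

with open("/etc/resolv.conf", encoding="utf-8") as fh:
    nameservers = [line.split()[1] for line in fh
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

if len(nameservers) > MAX_NAMESERVERS:
    kept = nameservers[:MAX_NAMESERVERS]
    print(f"{len(nameservers)} nameservers configured, only {kept} will be applied")
else:
    print("nameserver count within limits:", nameservers)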
Oct 9 01:05:53.213900 kubelet[2669]: E1009 01:05:53.212718 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:53.230085 containerd[1492]: time="2024-10-09T01:05:53.223490389Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 9 01:05:53.340741 containerd[1492]: time="2024-10-09T01:05:53.340678416Z" level=info msg="CreateContainer within sandbox \"15c58d650dfe8720467b99ccfd7fd14a0fc6cf649bd8b65898f1de56a44e9083\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3606721674385431eeeb3836b9eadaa59d8bc6568a3d3790374308ae17c43cd6\""
Oct 9 01:05:53.350867 containerd[1492]: time="2024-10-09T01:05:53.350791493Z" level=info msg="StartContainer for \"3606721674385431eeeb3836b9eadaa59d8bc6568a3d3790374308ae17c43cd6\""
Oct 9 01:05:53.420599 systemd[1]: Started cri-containerd-3606721674385431eeeb3836b9eadaa59d8bc6568a3d3790374308ae17c43cd6.scope - libcontainer container 3606721674385431eeeb3836b9eadaa59d8bc6568a3d3790374308ae17c43cd6.
Oct 9 01:05:53.481833 containerd[1492]: time="2024-10-09T01:05:53.481554663Z" level=info msg="StartContainer for \"3606721674385431eeeb3836b9eadaa59d8bc6568a3d3790374308ae17c43cd6\" returns successfully"
Oct 9 01:05:53.967370 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 9 01:05:54.222644 kubelet[2669]: E1009 01:05:54.222482 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:55.798974 kubelet[2669]: E1009 01:05:55.798896 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:57.375573 systemd-networkd[1399]: lxc_health: Link UP
Oct 9 01:05:57.383587 systemd-networkd[1399]: lxc_health: Gained carrier
Oct 9 01:05:57.798815 kubelet[2669]: E1009 01:05:57.798761 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:58.076737 kubelet[2669]: I1009 01:05:58.076580 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9flsf" podStartSLOduration=9.076561698 podStartE2EDuration="9.076561698s" podCreationTimestamp="2024-10-09 01:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:05:54.281412209 +0000 UTC m=+85.451824850" watchObservedRunningTime="2024-10-09 01:05:58.076561698 +0000 UTC m=+89.246974349"
Oct 9 01:05:58.229009 kubelet[2669]: E1009 01:05:58.228951 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:59.230879 kubelet[2669]: E1009 01:05:59.230651 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:05:59.240556 systemd-networkd[1399]: lxc_health: Gained IPv6LL
Oct 9 01:05:59.927351 kubelet[2669]: E1009 01:05:59.927314 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:04.959182 sshd[4488]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:04.963570 systemd[1]: sshd@26-10.0.0.110:22-10.0.0.1:34244.service: Deactivated successfully.
Oct 9 01:06:04.965630 systemd[1]: session-27.scope: Deactivated successfully.
Oct 9 01:06:04.966217 systemd-logind[1481]: Session 27 logged out. Waiting for processes to exit.
Oct 9 01:06:04.967287 systemd-logind[1481]: Removed session 27.