Mar 10 01:05:35.339354 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 01:05:35.339442 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:05:35.339481 kernel: BIOS-provided physical RAM map:
Mar 10 01:05:35.339488 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 10 01:05:35.339493 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 10 01:05:35.339499 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 10 01:05:35.339508 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 10 01:05:35.339519 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 10 01:05:35.339529 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 10 01:05:35.339538 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 10 01:05:35.339554 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 10 01:05:35.339563 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 10 01:05:35.339604 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 10 01:05:35.339611 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 10 01:05:35.339636 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 10 01:05:35.339642 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 10 01:05:35.339653 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 10 01:05:35.339659 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 10 01:05:35.339666 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 10 01:05:35.339672 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:05:35.339678 kernel: NX (Execute Disable) protection: active
Mar 10 01:05:35.339684 kernel: APIC: Static calls initialized
Mar 10 01:05:35.339690 kernel: efi: EFI v2.7 by EDK II
Mar 10 01:05:35.339696 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 10 01:05:35.339702 kernel: SMBIOS 2.8 present.
Mar 10 01:05:35.339708 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 10 01:05:35.339714 kernel: Hypervisor detected: KVM
Mar 10 01:05:35.339724 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:05:35.339730 kernel: kvm-clock: using sched offset of 11730384965 cycles
Mar 10 01:05:35.339738 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:05:35.339749 kernel: tsc: Detected 2445.426 MHz processor
Mar 10 01:05:35.339761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:05:35.339771 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:05:35.339782 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 10 01:05:35.339792 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 10 01:05:35.339803 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:05:35.339819 kernel: Using GB pages for direct mapping
Mar 10 01:05:35.339830 kernel: Secure boot disabled
Mar 10 01:05:35.339840 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:05:35.339851 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 10 01:05:35.339869 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 10 01:05:35.339880 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:05:35.339891 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:05:35.339907 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 10 01:05:35.339952 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:05:35.339967 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:05:35.339977 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:05:35.339987 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:05:35.339999 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 10 01:05:35.340009 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 10 01:05:35.340026 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 10 01:05:35.340037 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 10 01:05:35.340049 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 10 01:05:35.340060 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 10 01:05:35.340072 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 10 01:05:35.340079 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 10 01:05:35.340085 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 10 01:05:35.340092 kernel: No NUMA configuration found
Mar 10 01:05:35.340131 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 10 01:05:35.340150 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 10 01:05:35.340219 kernel: Zone ranges:
Mar 10 01:05:35.340232 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:05:35.340239 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 10 01:05:35.340245 kernel: Normal empty
Mar 10 01:05:35.340252 kernel: Movable zone start for each node
Mar 10 01:05:35.340259 kernel: Early memory node ranges
Mar 10 01:05:35.340266 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 10 01:05:35.340272 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 10 01:05:35.340279 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 10 01:05:35.340290 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 10 01:05:35.340297 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 10 01:05:35.340304 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 10 01:05:35.340331 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 10 01:05:35.340338 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:05:35.340345 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 10 01:05:35.340351 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 10 01:05:35.340358 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:05:35.340364 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 10 01:05:35.340375 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 10 01:05:35.340382 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 10 01:05:35.340393 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:05:35.340406 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:05:35.340416 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:05:35.340426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:05:35.340438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:05:35.340490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:05:35.340502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:05:35.340519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:05:35.340530 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:05:35.340540 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 01:05:35.340551 kernel: TSC deadline timer available
Mar 10 01:05:35.340562 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 10 01:05:35.340573 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:05:35.340584 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 01:05:35.340594 kernel: kvm-guest: setup PV sched yield
Mar 10 01:05:35.340605 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 10 01:05:35.340621 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:05:35.340632 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:05:35.340643 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 01:05:35.340654 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 10 01:05:35.340666 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 10 01:05:35.340676 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 01:05:35.340687 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:05:35.340698 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:05:35.340710 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:05:35.340759 kernel: random: crng init done
Mar 10 01:05:35.340772 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 01:05:35.340784 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:05:35.340794 kernel: Fallback order for Node 0: 0
Mar 10 01:05:35.340805 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 10 01:05:35.340816 kernel: Policy zone: DMA32
Mar 10 01:05:35.340826 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:05:35.340838 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 166124K reserved, 0K cma-reserved)
Mar 10 01:05:35.340854 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 01:05:35.340865 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 01:05:35.340875 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 01:05:35.340886 kernel: Dynamic Preempt: voluntary
Mar 10 01:05:35.340897 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:05:35.340928 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:05:35.340946 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 01:05:35.340958 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:05:35.340969 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:05:35.340981 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:05:35.340992 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:05:35.341005 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 01:05:35.341022 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 01:05:35.341036 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:05:35.341047 kernel: Console: colour dummy device 80x25
Mar 10 01:05:35.341059 kernel: printk: console [ttyS0] enabled
Mar 10 01:05:35.341104 kernel: ACPI: Core revision 20230628
Mar 10 01:05:35.341123 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 01:05:35.341135 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:05:35.341149 kernel: x2apic enabled
Mar 10 01:05:35.341219 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:05:35.341232 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 01:05:35.341244 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 01:05:35.341257 kernel: kvm-guest: setup PV IPIs
Mar 10 01:05:35.341270 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 01:05:35.341282 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 10 01:05:35.341302 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 10 01:05:35.341314 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:05:35.341327 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 01:05:35.341339 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 01:05:35.341353 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:05:35.341366 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:05:35.341379 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:05:35.341393 kernel: Speculative Store Bypass: Vulnerable
Mar 10 01:05:35.341405 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 01:05:35.341425 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 01:05:35.341437 kernel: active return thunk: srso_alias_return_thunk
Mar 10 01:05:35.341484 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 01:05:35.341520 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 01:05:35.341534 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 01:05:35.341548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:05:35.341561 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:05:35.341574 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:05:35.341587 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:05:35.341606 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 01:05:35.341619 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:05:35.341632 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:05:35.341645 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 01:05:35.341658 kernel: landlock: Up and running.
Mar 10 01:05:35.341671 kernel: SELinux: Initializing.
Mar 10 01:05:35.341684 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:05:35.341696 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:05:35.341712 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 01:05:35.341724 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:05:35.341736 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:05:35.341748 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:05:35.341760 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 01:05:35.341771 kernel: signal: max sigframe size: 1776
Mar 10 01:05:35.341783 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:05:35.341795 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:05:35.341807 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:05:35.341824 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:05:35.341836 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:05:35.341847 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 01:05:35.341859 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 01:05:35.341870 kernel: smpboot: Max logical packages: 1
Mar 10 01:05:35.341882 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 10 01:05:35.341894 kernel: devtmpfs: initialized
Mar 10 01:05:35.341906 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:05:35.341917 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 10 01:05:35.341934 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 10 01:05:35.341945 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 10 01:05:35.341957 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 10 01:05:35.341969 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 10 01:05:35.341980 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:05:35.341992 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 01:05:35.342003 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:05:35.342015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:05:35.342026 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:05:35.342043 kernel: audit: type=2000 audit(1773104732.120:1): state=initialized audit_enabled=0 res=1
Mar 10 01:05:35.342055 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:05:35.342066 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:05:35.342078 kernel: cpuidle: using governor menu
Mar 10 01:05:35.342090 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:05:35.342101 kernel: dca service started, version 1.12.1
Mar 10 01:05:35.342113 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 01:05:35.342127 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:05:35.342138 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:05:35.342212 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:05:35.342226 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:05:35.342238 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:05:35.342250 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:05:35.342262 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:05:35.342273 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:05:35.342285 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:05:35.342296 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:05:35.342308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:05:35.342327 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 01:05:35.342338 kernel: ACPI: Interpreter enabled
Mar 10 01:05:35.342350 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 01:05:35.342363 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:05:35.342374 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:05:35.342387 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:05:35.342399 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:05:35.342411 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:05:35.342937 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:05:35.343257 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 01:05:35.343528 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 01:05:35.343548 kernel: PCI host bridge to bus 0000:00
Mar 10 01:05:35.343880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:05:35.344066 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:05:35.344321 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:05:35.344558 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 01:05:35.344742 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:05:35.344936 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 10 01:05:35.345133 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:05:35.345666 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 01:05:35.345863 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 10 01:05:35.346043 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 10 01:05:35.346288 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 10 01:05:35.346442 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 10 01:05:35.346680 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 10 01:05:35.346870 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:05:35.347085 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 10 01:05:35.347326 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 10 01:05:35.347548 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 10 01:05:35.347700 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 10 01:05:35.347911 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 10 01:05:35.348065 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 10 01:05:35.348311 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 10 01:05:35.348540 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 10 01:05:35.348847 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 10 01:05:35.349013 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 10 01:05:35.349245 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 10 01:05:35.349401 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 10 01:05:35.349598 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 10 01:05:35.349804 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 01:05:35.349953 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:05:35.350247 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 01:05:35.350414 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 10 01:05:35.350597 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 10 01:05:35.350779 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 01:05:35.350957 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 10 01:05:35.350969 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:05:35.350976 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:05:35.350983 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:05:35.350996 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:05:35.351003 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:05:35.351010 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:05:35.351017 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:05:35.351024 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:05:35.351031 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:05:35.351037 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:05:35.351044 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:05:35.351051 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:05:35.351061 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:05:35.351069 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:05:35.351076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:05:35.351083 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:05:35.351090 kernel: iommu: Default domain type: Translated
Mar 10 01:05:35.351097 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:05:35.351106 kernel: efivars: Registered efivars operations
Mar 10 01:05:35.351119 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:05:35.351133 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:05:35.351149 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 10 01:05:35.351216 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 10 01:05:35.351226 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 10 01:05:35.351233 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 10 01:05:35.351395 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:05:35.351578 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:05:35.351726 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:05:35.351735 kernel: vgaarb: loaded
Mar 10 01:05:35.351743 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 01:05:35.351756 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 01:05:35.351763 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:05:35.351770 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:05:35.351777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:05:35.351785 kernel: pnp: PnP ACPI init
Mar 10 01:05:35.352102 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:05:35.352126 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 01:05:35.352140 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:05:35.352215 kernel: NET: Registered PF_INET protocol family
Mar 10 01:05:35.352231 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 01:05:35.352243 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 01:05:35.352251 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:05:35.352258 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:05:35.352265 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 01:05:35.352273 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 01:05:35.352280 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:05:35.352287 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:05:35.352299 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:05:35.352306 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:05:35.352507 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 10 01:05:35.352659 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 10 01:05:35.352796 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:05:35.352931 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:05:35.353064 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:05:35.353295 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 01:05:35.353506 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:05:35.353650 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 10 01:05:35.353661 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:05:35.353668 kernel: Initialise system trusted keyrings
Mar 10 01:05:35.353675 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 01:05:35.353682 kernel: Key type asymmetric registered
Mar 10 01:05:35.353689 kernel: Asymmetric key parser 'x509' registered
Mar 10 01:05:35.353696 kernel: hrtimer: interrupt took 8168667 ns
Mar 10 01:05:35.353704 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 10 01:05:35.353716 kernel: io scheduler mq-deadline registered
Mar 10 01:05:35.353723 kernel: io scheduler kyber registered
Mar 10 01:05:35.353730 kernel: io scheduler bfq registered
Mar 10 01:05:35.353737 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 01:05:35.353745 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:05:35.353753 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 01:05:35.353760 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 01:05:35.353767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 01:05:35.353775 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 01:05:35.353785 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 01:05:35.353792 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 01:05:35.353800 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 01:05:35.354139 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 01:05:35.354154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 01:05:35.354415 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 01:05:35.354613 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:05:34 UTC (1773104734)
Mar 10 01:05:35.354755 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 01:05:35.354771 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 01:05:35.354779 kernel: efifb: probing for efifb
Mar 10 01:05:35.354786 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 10 01:05:35.354793 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 10 01:05:35.354801 kernel: efifb: scrolling: redraw
Mar 10 01:05:35.354808 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 10 01:05:35.354815 kernel: Console: switching to colour frame buffer device 100x37
Mar 10 01:05:35.354822 kernel: fb0: EFI VGA frame buffer device
Mar 10 01:05:35.354829 kernel: pstore: Using crash dump compression: deflate
Mar 10 01:05:35.354840 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 10 01:05:35.354847 kernel: NET: Registered PF_INET6 protocol family
Mar 10 01:05:35.354854 kernel: Segment Routing with IPv6
Mar 10 01:05:35.354861 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 01:05:35.354868 kernel: NET: Registered PF_PACKET protocol family
Mar 10 01:05:35.354875 kernel: Key type dns_resolver registered
Mar 10 01:05:35.354906 kernel: IPI shorthand broadcast: enabled
Mar 10 01:05:35.354917 kernel: sched_clock: Marking stable (3466029050, 545544741)->(4432424274, -420850483)
Mar 10 01:05:35.354924 kernel: registered taskstats version 1
Mar 10 01:05:35.354935 kernel: Loading compiled-in X.509 certificates
Mar 10 01:05:35.354943 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa'
Mar 10 01:05:35.354950 kernel: Key type .fscrypt registered
Mar 10 01:05:35.354957 kernel: Key type fscrypt-provisioning registered
Mar 10 01:05:35.354965 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 01:05:35.354972 kernel: ima: Allocated hash algorithm: sha1
Mar 10 01:05:35.354980 kernel: ima: No architecture policies found
Mar 10 01:05:35.354987 kernel: clk: Disabling unused clocks
Mar 10 01:05:35.354997 kernel: Freeing unused kernel image (initmem) memory: 42896K
Mar 10 01:05:35.355005 kernel: Write protecting the kernel read-only data: 36864k
Mar 10 01:05:35.355012 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 10 01:05:35.355020 kernel: Run /init as init process
Mar 10 01:05:35.355027 kernel: with arguments:
Mar 10 01:05:35.355034 kernel: /init
Mar 10 01:05:35.355042 kernel: with environment:
Mar 10 01:05:35.355049 kernel: HOME=/
Mar 10 01:05:35.355056 kernel: TERM=linux
Mar 10 01:05:35.355091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:05:35.355107 systemd[1]: Detected virtualization kvm.
Mar 10 01:05:35.355123 systemd[1]: Detected architecture x86-64.
Mar 10 01:05:35.355136 systemd[1]: Running in initrd.
Mar 10 01:05:35.355148 systemd[1]: No hostname configured, using default hostname.
Mar 10 01:05:35.355212 systemd[1]: Hostname set to .
Mar 10 01:05:35.355229 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:05:35.355248 systemd[1]: Queued start job for default target initrd.target.
Mar 10 01:05:35.355260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:05:35.355274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:05:35.355286 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 01:05:35.355293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:05:35.355302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 01:05:35.355313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 01:05:35.355322 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 01:05:35.355330 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 01:05:35.355338 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:05:35.355346 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:05:35.355354 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:05:35.355365 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:05:35.355372 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:05:35.355380 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:05:35.355388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:05:35.355395 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:05:35.355403 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:05:35.355411 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 01:05:35.355419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:05:35.355426 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:05:35.355437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:05:35.355474 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:05:35.355483 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 01:05:35.355490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:05:35.355498 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 01:05:35.355506 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 01:05:35.355513 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:05:35.355547 systemd-journald[194]: Collecting audit messages is disabled.
Mar 10 01:05:35.355570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:05:35.355578 systemd-journald[194]: Journal started
Mar 10 01:05:35.355597 systemd-journald[194]: Runtime Journal (/run/log/journal/b1838cd037fe4aaf87e23c65a0d7640b) is 6.0M, max 48.3M, 42.2M free.
Mar 10 01:05:35.394828 systemd-modules-load[195]: Inserted module 'overlay'
Mar 10 01:05:35.407276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:05:35.416470 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:05:35.418727 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 01:05:35.428422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:05:35.434643 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 01:05:35.447310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Update your scripts to load br_netfilter if you need this.
Mar 10 01:05:35.452258 kernel: Bridge firewalling registered
Mar 10 01:05:35.452225 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 10 01:05:35.460552 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:05:35.462080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 01:05:35.480724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 01:05:35.484661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:05:35.490783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:05:35.502661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:05:35.503321 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 01:05:35.528308 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 10 01:05:35.546933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 01:05:35.552346 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:05:35.563232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:05:35.570007 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 10 01:05:35.580744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 01:05:35.585621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 10 01:05:35.607561 dracut-cmdline[230]: dracut-dracut-053 Mar 10 01:05:35.612405 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc Mar 10 01:05:35.648222 systemd-resolved[231]: Positive Trust Anchors: Mar 10 01:05:35.648384 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:05:35.648435 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:05:35.653756 systemd-resolved[231]: Defaulting to hostname 'linux'. Mar 10 01:05:35.657685 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:05:35.675708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 01:05:35.763316 kernel: SCSI subsystem initialized Mar 10 01:05:35.773250 kernel: Loading iSCSI transport class v2.0-870. Mar 10 01:05:35.787291 kernel: iscsi: registered transport (tcp) Mar 10 01:05:35.810870 kernel: iscsi: registered transport (qla4xxx) Mar 10 01:05:35.810954 kernel: QLogic iSCSI HBA Driver Mar 10 01:05:35.878208 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 10 01:05:35.898408 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 10 01:05:35.939934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 10 01:05:35.940016 kernel: device-mapper: uevent: version 1.0.3 Mar 10 01:05:35.942814 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 10 01:05:35.994355 kernel: raid6: avx2x4 gen() 21218 MB/s Mar 10 01:05:36.013313 kernel: raid6: avx2x2 gen() 20969 MB/s Mar 10 01:05:36.033849 kernel: raid6: avx2x1 gen() 11901 MB/s Mar 10 01:05:36.033931 kernel: raid6: using algorithm avx2x4 gen() 21218 MB/s Mar 10 01:05:36.054973 kernel: raid6: .... xor() 4354 MB/s, rmw enabled Mar 10 01:05:36.055035 kernel: raid6: using avx2x2 recovery algorithm Mar 10 01:05:36.078256 kernel: xor: automatically using best checksumming function avx Mar 10 01:05:36.287570 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 10 01:05:36.313777 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 10 01:05:36.335494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 01:05:36.367819 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 10 01:05:36.378092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:05:36.393560 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 10 01:05:36.412055 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Mar 10 01:05:36.459910 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 01:05:36.473570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 01:05:36.575305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 01:05:36.589864 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 10 01:05:36.617286 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 10 01:05:36.618339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 01:05:36.632094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 01:05:36.644392 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 01:05:36.662728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 10 01:05:36.676256 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 10 01:05:36.706311 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 10 01:05:36.724658 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 10 01:05:36.724730 kernel: GPT:9289727 != 19775487 Mar 10 01:05:36.724742 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 10 01:05:36.724753 kernel: GPT:9289727 != 19775487 Mar 10 01:05:36.724763 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 10 01:05:36.724774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:05:36.724089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 10 01:05:36.741943 kernel: libata version 3.00 loaded. Mar 10 01:05:36.742435 kernel: cryptd: max_cpu_qlen set to 1000 Mar 10 01:05:36.748610 kernel: ahci 0000:00:1f.2: version 3.0 Mar 10 01:05:36.748997 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 10 01:05:36.752582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 01:05:36.752862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:05:36.775841 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 10 01:05:36.776375 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 10 01:05:36.776775 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 10 01:05:36.780525 kernel: AES CTR mode by8 optimization enabled Mar 10 01:05:36.780450 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:05:36.795505 kernel: scsi host0: ahci Mar 10 01:05:36.797728 kernel: scsi host1: ahci Mar 10 01:05:36.801614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 01:05:36.845649 kernel: scsi host2: ahci Mar 10 01:05:36.845891 kernel: scsi host3: ahci Mar 10 01:05:36.846082 kernel: scsi host4: ahci Mar 10 01:05:36.846381 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463) Mar 10 01:05:36.846395 kernel: scsi host5: ahci Mar 10 01:05:36.846622 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 10 01:05:36.846634 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 10 01:05:36.846645 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Mar 10 01:05:36.846655 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 10 01:05:36.846665 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 10 01:05:36.846684 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 10 01:05:36.846694 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 10 01:05:36.801914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:05:36.837912 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:05:36.863696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:05:36.894414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 10 01:05:36.908154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 10 01:05:36.926730 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 10 01:05:36.938038 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 10 01:05:36.945393 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 10 01:05:36.963736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 01:05:36.979350 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 01:05:36.982557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 01:05:36.982625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:05:36.988658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:05:36.995263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:05:37.016737 disk-uuid[557]: Primary Header is updated. Mar 10 01:05:37.016737 disk-uuid[557]: Secondary Entries is updated. Mar 10 01:05:37.016737 disk-uuid[557]: Secondary Header is updated. Mar 10 01:05:37.027918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:05:37.027974 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:05:37.028403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:05:37.052555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:05:37.098800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 10 01:05:37.153539 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 10 01:05:37.161254 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 10 01:05:37.161299 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 10 01:05:37.164265 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 10 01:05:37.169232 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 10 01:05:37.171550 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 10 01:05:37.176042 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 10 01:05:37.176069 kernel: ata3.00: applying bridge limits Mar 10 01:05:37.176080 kernel: ata3.00: configured for UDMA/100 Mar 10 01:05:37.181209 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 10 01:05:37.238136 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 10 01:05:37.238543 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 10 01:05:37.260208 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 10 01:05:38.059755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:05:38.061566 disk-uuid[559]: The operation has completed successfully. Mar 10 01:05:38.134610 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 10 01:05:38.134796 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 01:05:38.182342 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 01:05:38.196007 sh[599]: Success Mar 10 01:05:38.221224 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 10 01:05:38.281573 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 10 01:05:38.315582 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 01:05:38.321110 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 10 01:05:38.341802 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486 Mar 10 01:05:38.341858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:05:38.341880 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 10 01:05:38.346981 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 10 01:05:38.347023 kernel: BTRFS info (device dm-0): using free space tree Mar 10 01:05:38.357788 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 01:05:38.358617 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 01:05:38.385467 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 10 01:05:38.392538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 01:05:38.429302 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:05:38.429374 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:05:38.429395 kernel: BTRFS info (device vda6): using free space tree Mar 10 01:05:38.439228 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 01:05:38.453859 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 10 01:05:38.460542 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:05:38.468837 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 01:05:38.480384 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 10 01:05:38.627032 ignition[703]: Ignition 2.19.0 Mar 10 01:05:38.627044 ignition[703]: Stage: fetch-offline Mar 10 01:05:38.627086 ignition[703]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:05:38.627098 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:05:38.627271 ignition[703]: parsed url from cmdline: "" Mar 10 01:05:38.627276 ignition[703]: no config URL provided Mar 10 01:05:38.627283 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 01:05:38.627294 ignition[703]: no config at "/usr/lib/ignition/user.ign" Mar 10 01:05:38.647759 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 01:05:38.627371 ignition[703]: op(1): [started] loading QEMU firmware config module Mar 10 01:05:38.627377 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 10 01:05:38.639054 ignition[703]: op(1): [finished] loading QEMU firmware config module Mar 10 01:05:38.666538 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 01:05:38.697752 systemd-networkd[788]: lo: Link UP Mar 10 01:05:38.697779 systemd-networkd[788]: lo: Gained carrier Mar 10 01:05:38.702896 systemd-networkd[788]: Enumeration completed Mar 10 01:05:38.703389 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:05:38.705297 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:05:38.705302 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 01:05:38.707225 systemd-networkd[788]: eth0: Link UP Mar 10 01:05:38.707230 systemd-networkd[788]: eth0: Gained carrier Mar 10 01:05:38.707240 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 10 01:05:38.708672 systemd[1]: Reached target network.target - Network. Mar 10 01:05:38.727241 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 01:05:38.893138 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.112 Mar 10 01:05:38.893254 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Mar 10 01:05:38.911220 ignition[703]: parsing config with SHA512: 6cb38a9213899986875405ce7a9f858cd0f81d82d1adf898c2366c5cc9da60e4418883aa2911bb5ead8606728be03a736861f44fdfd4d8ba4849149b7f6f3666 Mar 10 01:05:38.915647 unknown[703]: fetched base config from "system" Mar 10 01:05:38.915685 unknown[703]: fetched user config from "qemu" Mar 10 01:05:38.916140 ignition[703]: fetch-offline: fetch-offline passed Mar 10 01:05:38.918523 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 01:05:38.916319 ignition[703]: Ignition finished successfully Mar 10 01:05:38.923823 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 10 01:05:38.944430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 10 01:05:38.978901 ignition[792]: Ignition 2.19.0 Mar 10 01:05:38.978927 ignition[792]: Stage: kargs Mar 10 01:05:38.979100 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:05:38.979113 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:05:38.979886 ignition[792]: kargs: kargs passed Mar 10 01:05:38.979936 ignition[792]: Ignition finished successfully Mar 10 01:05:38.994969 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 10 01:05:39.013446 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 10 01:05:39.030850 ignition[800]: Ignition 2.19.0 Mar 10 01:05:39.030878 ignition[800]: Stage: disks Mar 10 01:05:39.031100 ignition[800]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:05:39.031114 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:05:39.032076 ignition[800]: disks: disks passed Mar 10 01:05:39.032129 ignition[800]: Ignition finished successfully Mar 10 01:05:39.049636 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 10 01:05:39.052854 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 10 01:05:39.059477 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 10 01:05:39.068422 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 01:05:39.076899 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 01:05:39.084817 systemd[1]: Reached target basic.target - Basic System. Mar 10 01:05:39.112424 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 10 01:05:39.137018 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 10 01:05:39.144769 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 10 01:05:39.166557 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 10 01:05:39.292283 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none. Mar 10 01:05:39.293019 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 10 01:05:39.300943 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 10 01:05:39.324403 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 01:05:39.334112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 10 01:05:39.350524 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Mar 10 01:05:39.350548 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:05:39.350559 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:05:39.350569 kernel: BTRFS info (device vda6): using free space tree Mar 10 01:05:39.356260 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 01:05:39.356633 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 10 01:05:39.356705 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 10 01:05:39.356731 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 01:05:39.370849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 10 01:05:39.381765 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 10 01:05:39.404531 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 10 01:05:39.454431 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 10 01:05:39.466828 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 10 01:05:39.473027 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 10 01:05:39.484898 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 10 01:05:39.620398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 10 01:05:39.643406 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 10 01:05:39.653304 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 10 01:05:39.665414 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 10 01:05:39.671978 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:05:39.691894 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 10 01:05:39.711460 ignition[934]: INFO : Ignition 2.19.0 Mar 10 01:05:39.711460 ignition[934]: INFO : Stage: mount Mar 10 01:05:39.715592 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 01:05:39.715592 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:05:39.715592 ignition[934]: INFO : mount: mount passed Mar 10 01:05:39.715592 ignition[934]: INFO : Ignition finished successfully Mar 10 01:05:39.727726 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 10 01:05:39.740357 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 10 01:05:39.750636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 01:05:39.768240 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (947) Mar 10 01:05:39.774069 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 01:05:39.774111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:05:39.774128 kernel: BTRFS info (device vda6): using free space tree Mar 10 01:05:39.783215 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 01:05:39.784696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 10 01:05:39.823259 ignition[964]: INFO : Ignition 2.19.0 Mar 10 01:05:39.823259 ignition[964]: INFO : Stage: files Mar 10 01:05:39.828422 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 01:05:39.828422 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:05:39.828422 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Mar 10 01:05:39.828422 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 10 01:05:39.828422 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 10 01:05:39.848258 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 10 01:05:39.848258 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 10 01:05:39.848258 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 10 01:05:39.848258 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 01:05:39.848258 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 10 01:05:39.841352 unknown[964]: wrote ssh authorized keys file for user: core Mar 10 01:05:39.930771 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 10 01:05:40.021567 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 01:05:40.027319 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 10 01:05:40.027319 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 10 01:05:40.104043 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 10 01:05:40.293042 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:05:40.293042 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 01:05:40.306088 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 01:05:40.306088 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:05:40.319259 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:05:40.323846 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:05:40.326414 systemd-networkd[788]: eth0: Gained IPv6LL
Mar 10 01:05:40.331011 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:05:40.331011 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:05:40.341906 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:05:40.347292 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:05:40.352537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:05:40.357813 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:05:40.365954 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:05:40.373341 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:05:40.379461 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 10 01:05:40.601821 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 10 01:05:41.915699 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:05:41.915699 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 10 01:05:41.925877 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:05:41.931431 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:05:41.931431 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 10 01:05:41.931431 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 10 01:05:41.943840 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:05:41.949496 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:05:41.949496 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 10 01:05:41.949496 ignition[964]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:05:42.111047 ignition[964]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:05:42.126412 ignition[964]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:05:42.139492 ignition[964]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:05:42.139492 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 01:05:42.139492 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 01:05:42.139492 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:05:42.139492 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:05:42.139492 ignition[964]: INFO : files: files passed
Mar 10 01:05:42.139492 ignition[964]: INFO : Ignition finished successfully
Mar 10 01:05:42.130117 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 01:05:42.178902 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 01:05:42.233108 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 01:05:42.254146 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 01:05:42.273495 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 01:05:42.254383 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 01:05:42.289941 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:05:42.289941 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:05:42.272579 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:05:42.307993 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:05:42.277997 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 01:05:42.314647 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:05:42.359030 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:05:42.359338 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:05:42.367092 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:05:42.373894 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:05:42.377008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:05:42.378470 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:05:42.405668 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:05:42.412343 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:05:42.432467 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:05:42.436134 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:05:42.442859 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:05:42.449770 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:05:42.450022 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:05:42.457831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:05:42.462765 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:05:42.468838 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:05:42.474936 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:05:42.481351 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:05:42.487780 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:05:42.494363 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:05:42.501655 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:05:42.507685 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:05:42.514124 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:05:42.519612 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:05:42.519851 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:05:42.526962 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:05:42.531459 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:05:42.538735 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:05:42.538974 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:05:42.546501 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:05:42.546757 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:05:42.554126 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:05:42.554714 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:05:42.561717 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:05:42.567276 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:05:42.567663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:05:42.574896 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:05:42.580491 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:05:42.586427 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:05:42.586577 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:05:42.592348 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:05:42.592491 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:05:42.646152 ignition[1019]: INFO : Ignition 2.19.0
Mar 10 01:05:42.646152 ignition[1019]: INFO : Stage: umount
Mar 10 01:05:42.599690 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:05:42.663955 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:05:42.663955 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:05:42.663955 ignition[1019]: INFO : umount: umount passed
Mar 10 01:05:42.663955 ignition[1019]: INFO : Ignition finished successfully
Mar 10 01:05:42.599815 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:05:42.607342 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:05:42.607482 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:05:42.627412 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:05:42.632666 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:05:42.632846 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:05:42.641489 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:05:42.646129 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:05:42.646501 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:05:42.650133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:05:42.650573 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:05:42.659458 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:05:42.659659 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:05:42.666029 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:05:42.666279 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:05:42.674101 systemd[1]: Stopped target network.target - Network.
Mar 10 01:05:42.678516 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:05:42.678643 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:05:42.684106 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:05:42.684276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:05:42.690745 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:05:42.690822 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:05:42.694816 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:05:42.694893 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:05:42.702011 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:05:42.707740 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:05:42.715309 systemd-networkd[788]: eth0: DHCPv6 lease lost
Mar 10 01:05:42.716125 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:05:42.717120 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:05:42.717306 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:05:42.721776 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:05:42.721927 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:05:42.728883 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:05:42.729049 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:05:42.738080 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:05:42.738132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:05:42.745062 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:05:42.745125 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:05:42.763386 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:05:42.769648 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:05:42.769711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:05:42.775595 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:05:42.775651 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:05:42.781351 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:05:42.781405 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:05:42.784447 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:05:42.784500 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:05:42.790652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:05:42.813452 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:05:42.813729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:05:42.818799 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:05:42.818943 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:05:42.825074 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:05:42.825149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:05:42.956625 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:05:42.829293 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:05:42.829516 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:05:42.829799 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:05:42.829915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:05:42.830808 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:05:42.830865 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:05:42.831651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:05:42.831703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:05:42.852390 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:05:42.857298 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:05:42.857364 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:05:42.863793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:05:42.863849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:05:42.870040 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:05:42.870215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:05:42.876267 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:05:42.901484 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:05:42.913239 systemd[1]: Switching root.
Mar 10 01:05:43.032760 systemd-journald[194]: Journal stopped
Mar 10 01:05:44.705962 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:05:44.706079 kernel: SELinux: policy capability open_perms=1
Mar 10 01:05:44.706104 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:05:44.706136 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:05:44.706217 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:05:44.706260 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:05:44.706281 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:05:44.706301 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:05:44.706320 kernel: audit: type=1403 audit(1773104743.189:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:05:44.706350 systemd[1]: Successfully loaded SELinux policy in 68.063ms.
Mar 10 01:05:44.706389 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.861ms.
Mar 10 01:05:44.706412 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:05:44.706440 systemd[1]: Detected virtualization kvm.
Mar 10 01:05:44.706462 systemd[1]: Detected architecture x86-64.
Mar 10 01:05:44.706492 systemd[1]: Detected first boot.
Mar 10 01:05:44.706513 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:05:44.706534 zram_generator::config[1062]: No configuration found.
Mar 10 01:05:44.706601 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:05:44.706625 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 01:05:44.706645 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 01:05:44.706675 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:05:44.706699 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:05:44.706720 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:05:44.706740 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:05:44.706762 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:05:44.706783 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:05:44.706803 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:05:44.706824 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:05:44.706844 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:05:44.706870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:05:44.706892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:05:44.706912 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:05:44.706933 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:05:44.706953 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:05:44.706974 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:05:44.706995 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:05:44.707016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:05:44.707036 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 01:05:44.707062 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 01:05:44.707084 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:05:44.707104 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:05:44.707128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:05:44.707148 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:05:44.707231 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:05:44.707255 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:05:44.707277 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:05:44.707303 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:05:44.707324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:05:44.707343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:05:44.707365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:05:44.707385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:05:44.707406 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:05:44.707426 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:05:44.707445 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:05:44.707465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:05:44.707492 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:05:44.707513 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:05:44.707533 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:05:44.707598 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:05:44.707621 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:05:44.707640 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:05:44.707662 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:05:44.707684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:05:44.707710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:05:44.707730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:05:44.707751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:05:44.707772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:05:44.707791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:05:44.707811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:05:44.707833 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:05:44.707852 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 01:05:44.707886 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 01:05:44.707905 kernel: fuse: init (API version 7.39)
Mar 10 01:05:44.707925 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 01:05:44.707945 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 01:05:44.707966 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:05:44.707986 kernel: loop: module loaded
Mar 10 01:05:44.708004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:05:44.708023 kernel: ACPI: bus type drm_connector registered
Mar 10 01:05:44.708042 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:05:44.708065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:05:44.708114 systemd-journald[1146]: Collecting audit messages is disabled.
Mar 10 01:05:44.708148 systemd-journald[1146]: Journal started
Mar 10 01:05:44.708313 systemd-journald[1146]: Runtime Journal (/run/log/journal/b1838cd037fe4aaf87e23c65a0d7640b) is 6.0M, max 48.3M, 42.2M free.
Mar 10 01:05:44.118683 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:05:44.145968 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:05:44.146738 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 01:05:44.147121 systemd[1]: systemd-journald.service: Consumed 1.638s CPU time.
Mar 10 01:05:44.721281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:05:44.727362 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 01:05:44.727435 systemd[1]: Stopped verity-setup.service.
Mar 10 01:05:44.739343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:05:44.747701 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:05:44.749999 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:05:44.755674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:05:44.761021 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:05:44.766300 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:05:44.775067 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:05:44.780323 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:05:44.789072 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:05:44.802720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:05:44.827355 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:05:44.827781 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:05:44.836970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:05:44.837422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:05:44.849346 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:05:44.849750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:05:44.858000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:05:44.858669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:05:44.871595 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:05:44.872241 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:05:44.881126 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:05:44.881611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:05:44.885915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:05:44.890384 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:05:44.895326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:05:44.924816 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:05:44.943460 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:05:44.949743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:05:44.954472 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:05:44.954666 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:05:44.960059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 10 01:05:44.966831 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 01:05:44.974028 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:05:44.978099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:05:44.984351 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:05:44.992354 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:05:44.997878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:05:45.019399 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:05:45.030696 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:05:45.033713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:05:45.064545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:05:45.075700 systemd-journald[1146]: Time spent on flushing to /var/log/journal/b1838cd037fe4aaf87e23c65a0d7640b is 26.154ms for 992 entries.
Mar 10 01:05:45.075700 systemd-journald[1146]: System Journal (/var/log/journal/b1838cd037fe4aaf87e23c65a0d7640b) is 8.0M, max 195.6M, 187.6M free.
Mar 10 01:05:45.147702 systemd-journald[1146]: Received client request to flush runtime journal.
Mar 10 01:05:45.148922 kernel: loop0: detected capacity change from 0 to 219192
Mar 10 01:05:45.076446 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:05:45.088410 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:05:45.093735 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:05:45.100539 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 01:05:45.110360 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:05:45.126416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:05:45.137681 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:05:45.150599 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 10 01:05:45.158285 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 10 01:05:45.164909 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:05:45.224265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:05:45.219428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:05:45.230333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:05:45.231340 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 10 01:05:45.240444 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 10 01:05:45.258360 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:05:45.263875 kernel: loop1: detected capacity change from 0 to 140768
Mar 10 01:05:45.280730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:05:45.326246 kernel: loop2: detected capacity change from 0 to 142488
Mar 10 01:05:45.378377 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 10 01:05:45.378410 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 10 01:05:45.402967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:05:45.442230 kernel: loop3: detected capacity change from 0 to 219192
Mar 10 01:05:45.466231 kernel: loop4: detected capacity change from 0 to 140768
Mar 10 01:05:45.493215 kernel: loop5: detected capacity change from 0 to 142488
Mar 10 01:05:45.526952 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 10 01:05:45.528116 (sd-merge)[1200]: Merged extensions into '/usr'.
Mar 10 01:05:45.533486 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:05:45.533503 systemd[1]: Reloading...
Mar 10 01:05:45.720298 zram_generator::config[1225]: No configuration found.
Mar 10 01:05:45.985459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:05:46.066349 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:05:46.084084 systemd[1]: Reloading finished in 549 ms.
Mar 10 01:05:46.136974 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:05:46.141615 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:05:46.222847 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:05:46.227711 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:05:46.244467 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:05:46.244508 systemd[1]: Reloading...
Mar 10 01:05:46.280557 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:05:46.281330 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:05:46.282734 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:05:46.285625 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 10 01:05:46.291482 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 10 01:05:46.314117 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:05:46.314248 systemd-tmpfiles[1264]: Skipping /boot
Mar 10 01:05:46.351391 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:05:46.351416 systemd-tmpfiles[1264]: Skipping /boot Mar 10 01:05:46.408378 zram_generator::config[1291]: No configuration found. Mar 10 01:05:46.548493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:05:46.618322 systemd[1]: Reloading finished in 373 ms. Mar 10 01:05:46.646778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 01:05:46.664936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 01:05:46.682436 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 10 01:05:46.688455 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 10 01:05:46.695001 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 10 01:05:46.705665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 01:05:46.713781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 01:05:46.730004 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 10 01:05:46.739679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:05:46.740299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:05:46.769011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:05:46.784754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:05:46.792875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 10 01:05:46.794068 augenrules[1353]: No rules Mar 10 01:05:46.797660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:05:46.797855 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Mar 10 01:05:46.803539 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 01:05:46.808604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:05:46.810102 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 01:05:46.815081 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 10 01:05:46.821425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:05:46.822110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:05:46.827507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:05:46.828258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:05:46.835394 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:05:46.835782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:05:46.855701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:05:46.864555 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 10 01:05:46.870381 systemd[1]: Finished ensure-sysext.service. Mar 10 01:05:46.890048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:05:46.890324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 10 01:05:46.899760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:05:46.910432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 01:05:46.917510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:05:46.926950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:05:46.931401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:05:46.934713 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 01:05:46.945121 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 10 01:05:46.967087 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 10 01:05:46.971861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:05:46.972564 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 01:05:46.978872 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 10 01:05:46.984658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:05:46.985468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:05:46.990772 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 01:05:46.991457 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 01:05:46.996657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:05:46.996865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:05:47.003899 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:05:47.004307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 10 01:05:47.038899 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 10 01:05:47.040517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 01:05:47.075487 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 01:05:47.075561 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 01:05:47.105766 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 10 01:05:47.121300 systemd-resolved[1335]: Positive Trust Anchors: Mar 10 01:05:47.121360 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:05:47.121414 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:05:47.128889 systemd-resolved[1335]: Defaulting to hostname 'linux'. Mar 10 01:05:47.133922 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:05:47.143906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 10 01:05:47.144324 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1379) Mar 10 01:05:47.280431 systemd-networkd[1394]: lo: Link UP Mar 10 01:05:47.280476 systemd-networkd[1394]: lo: Gained carrier Mar 10 01:05:47.285032 systemd-networkd[1394]: Enumeration completed Mar 10 01:05:47.285350 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:05:47.293447 systemd[1]: Reached target network.target - Network. Mar 10 01:05:47.296856 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:05:47.296896 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 01:05:47.298904 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:05:47.299031 systemd-networkd[1394]: eth0: Link UP Mar 10 01:05:47.299038 systemd-networkd[1394]: eth0: Gained carrier Mar 10 01:05:47.299058 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:05:47.311864 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 10 01:05:47.313286 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 01:05:47.315422 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Mar 10 01:05:47.323894 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 10 01:05:48.844009 systemd-resolved[1335]: Clock change detected. Flushing caches. Mar 10 01:05:48.844059 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Mar 10 01:05:48.844185 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2026-03-10 01:05:48.843898 UTC. Mar 10 01:05:48.848801 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 10 01:05:48.853730 systemd[1]: Reached target time-set.target - System Time Set. Mar 10 01:05:48.868722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 01:05:48.875712 kernel: ACPI: button: Power Button [PWRF] Mar 10 01:05:48.887982 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 10 01:05:48.921448 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 10 01:05:48.932920 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 10 01:05:48.933313 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 10 01:05:48.933558 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 10 01:05:48.943909 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 10 01:05:48.945135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:05:48.949889 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 10 01:05:49.070917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 01:05:49.071314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:05:49.092163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 10 01:05:49.125781 kernel: mousedev: PS/2 mouse device common for all mice Mar 10 01:05:49.222646 kernel: kvm_amd: TSC scaling supported Mar 10 01:05:49.222826 kernel: kvm_amd: Nested Virtualization enabled Mar 10 01:05:49.222856 kernel: kvm_amd: Nested Paging enabled Mar 10 01:05:49.224584 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 10 01:05:49.226925 kernel: kvm_amd: PMU virtualization is disabled Mar 10 01:05:49.281344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:05:49.293791 kernel: EDAC MC: Ver: 3.0.0 Mar 10 01:05:49.409991 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 10 01:05:49.447534 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 10 01:05:49.457615 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 01:05:49.493529 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 10 01:05:49.497647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 01:05:49.501168 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 01:05:49.504474 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 10 01:05:49.508062 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 10 01:05:49.511980 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 10 01:05:49.515275 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 10 01:05:49.520285 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 10 01:05:49.531017 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 10 01:05:49.531458 systemd[1]: Reached target paths.target - Path Units. Mar 10 01:05:49.542386 systemd[1]: Reached target timers.target - Timer Units. Mar 10 01:05:49.661884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 10 01:05:49.758900 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 10 01:05:49.769377 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 10 01:05:49.776279 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 10 01:05:49.782265 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 10 01:05:49.785471 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 01:05:49.788828 systemd[1]: Reached target basic.target - Basic System. Mar 10 01:05:49.791578 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:05:49.791609 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:05:49.792958 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 01:05:49.793400 systemd[1]: Starting containerd.service - containerd container runtime... Mar 10 01:05:49.798898 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 10 01:05:49.806033 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 10 01:05:49.817512 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 10 01:05:49.824083 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 10 01:05:49.826933 jq[1439]: false Mar 10 01:05:49.827928 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 10 01:05:49.833910 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 10 01:05:49.843036 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 10 01:05:49.848122 dbus-daemon[1438]: [system] SELinux support is enabled Mar 10 01:05:49.850804 extend-filesystems[1440]: Found loop3 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found loop4 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found loop5 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found sr0 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda1 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda2 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda3 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found usr Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda4 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda6 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda7 Mar 10 01:05:49.850804 extend-filesystems[1440]: Found vda9 Mar 10 01:05:49.850804 extend-filesystems[1440]: Checking size of /dev/vda9 Mar 10 01:05:50.072762 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 10 01:05:50.072820 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1379) Mar 10 01:05:50.072849 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 10 01:05:49.861200 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 10 01:05:50.073134 extend-filesystems[1440]: Resized partition /dev/vda9 Mar 10 01:05:49.885969 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 10 01:05:50.077309 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) Mar 10 01:05:50.077309 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 10 01:05:50.077309 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 10 01:05:50.077309 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 10 01:05:49.904807 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 10 01:05:50.097306 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Mar 10 01:05:49.984300 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 10 01:05:49.992178 systemd[1]: Starting update-engine.service - Update Engine... Mar 10 01:05:50.100951 update_engine[1461]: I20260310 01:05:50.045307 1461 main.cc:92] Flatcar Update Engine starting Mar 10 01:05:50.100951 update_engine[1461]: I20260310 01:05:50.047864 1461 update_check_scheduler.cc:74] Next update check in 6m13s Mar 10 01:05:50.030645 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 10 01:05:50.101659 jq[1462]: true Mar 10 01:05:50.054750 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 10 01:05:50.101226 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 10 01:05:50.116505 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 10 01:05:50.116948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 10 01:05:50.117534 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 10 01:05:50.133464 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 10 01:05:50.143868 systemd[1]: motdgen.service: Deactivated successfully. 
Mar 10 01:05:50.144193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 10 01:05:50.157336 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 10 01:05:50.157584 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 10 01:05:50.191206 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button) Mar 10 01:05:50.191244 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 10 01:05:50.199613 systemd-logind[1459]: New seat seat0. Mar 10 01:05:50.202361 jq[1467]: true Mar 10 01:05:50.231494 systemd[1]: Started systemd-logind.service - User Login Management. Mar 10 01:05:50.240800 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 10 01:05:50.267378 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 10 01:05:50.270001 tar[1465]: linux-amd64/LICENSE Mar 10 01:05:50.270500 tar[1465]: linux-amd64/helm Mar 10 01:05:50.279285 systemd[1]: Started update-engine.service - Update Engine. Mar 10 01:05:50.289315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 10 01:05:50.289626 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 10 01:05:50.297779 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 10 01:05:50.298275 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 10 01:05:50.314965 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 10 01:05:50.353900 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Mar 10 01:05:50.354818 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 10 01:05:50.363612 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 10 01:05:50.476780 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 01:05:50.496348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 01:05:50.508207 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 10 01:05:50.554979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 10 01:05:50.625299 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 01:05:50.630839 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:58282.service - OpenSSH per-connection server daemon (10.0.0.1:58282). Mar 10 01:05:50.651464 systemd[1]: issuegen.service: Deactivated successfully. Mar 10 01:05:50.651970 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 10 01:05:50.663567 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 10 01:05:50.768188 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 01:05:50.927423 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 01:05:50.927560 systemd-networkd[1394]: eth0: Gained IPv6LL Mar 10 01:05:50.941196 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 10 01:05:50.946535 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 01:05:50.951987 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 01:05:50.962312 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 01:05:50.979183 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Mar 10 01:05:50.987221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:05:50.993089 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 01:05:51.041909 sshd[1512]: Accepted publickey for core from 10.0.0.1 port 58282 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:05:51.012726 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:05:51.092961 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 01:05:51.104163 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 01:05:51.165007 systemd-logind[1459]: New session 1 of user core. Mar 10 01:05:51.182066 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 10 01:05:51.182373 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 10 01:05:51.187259 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 01:05:51.190817 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 01:05:51.261461 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 01:05:51.285872 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 01:05:51.563399 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 01:05:51.805788 containerd[1469]: time="2026-03-10T01:05:51.805021607Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 10 01:05:51.853912 systemd[1545]: Queued start job for default target default.target. Mar 10 01:05:51.867464 systemd[1545]: Created slice app.slice - User Application Slice. Mar 10 01:05:51.867516 systemd[1545]: Reached target paths.target - Paths. Mar 10 01:05:51.867531 systemd[1545]: Reached target timers.target - Timers. 
Mar 10 01:05:51.869739 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 01:05:51.991285 containerd[1469]: time="2026-03-10T01:05:51.989906771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:51.995774 containerd[1469]: time="2026-03-10T01:05:51.995645282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:05:51.995977 containerd[1469]: time="2026-03-10T01:05:51.995952085Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 10 01:05:51.996284 containerd[1469]: time="2026-03-10T01:05:51.996250562Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 10 01:05:51.996835 containerd[1469]: time="2026-03-10T01:05:51.996806991Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 10 01:05:51.996935 containerd[1469]: time="2026-03-10T01:05:51.996912187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:51.997211 containerd[1469]: time="2026-03-10T01:05:51.997181780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:05:51.997304 containerd[1469]: time="2026-03-10T01:05:51.997280735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:51.998150 containerd[1469]: time="2026-03-10T01:05:51.998080227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:05:51.998279 containerd[1469]: time="2026-03-10T01:05:51.998256005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:51.998533 containerd[1469]: time="2026-03-10T01:05:51.998435370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:05:51.998616 containerd[1469]: time="2026-03-10T01:05:51.998594958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:51.998939 containerd[1469]: time="2026-03-10T01:05:51.998912791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:51.999990 containerd[1469]: time="2026-03-10T01:05:51.999962802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:05:52.000994 containerd[1469]: time="2026-03-10T01:05:52.000963289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:05:52.001077 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 01:05:52.001300 systemd[1545]: Reached target sockets.target - Sockets. Mar 10 01:05:52.001341 systemd[1545]: Reached target basic.target - Basic System. Mar 10 01:05:52.001391 systemd[1545]: Reached target default.target - Main User Target. Mar 10 01:05:52.001434 systemd[1545]: Startup finished in 370ms.
Mar 10 01:05:52.001780 containerd[1469]: time="2026-03-10T01:05:52.001753193Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 10 01:05:52.002501 containerd[1469]: time="2026-03-10T01:05:52.002473908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 10 01:05:52.002542 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 01:05:52.003757 containerd[1469]: time="2026-03-10T01:05:52.003495525Z" level=info msg="metadata content store policy set" policy=shared Mar 10 01:05:52.013652 containerd[1469]: time="2026-03-10T01:05:52.013598522Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 10 01:05:52.015179 containerd[1469]: time="2026-03-10T01:05:52.014047621Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 10 01:05:52.015179 containerd[1469]: time="2026-03-10T01:05:52.014271298Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 10 01:05:52.015179 containerd[1469]: time="2026-03-10T01:05:52.014300684Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 10 01:05:52.015179 containerd[1469]: time="2026-03-10T01:05:52.014348763Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 10 01:05:52.015179 containerd[1469]: time="2026-03-10T01:05:52.014594752Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 10 01:05:52.015298 containerd[1469]: time="2026-03-10T01:05:52.015196836Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 10 01:05:52.015558 containerd[1469]: time="2026-03-10T01:05:52.015490785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 01:05:52.015558 containerd[1469]: time="2026-03-10T01:05:52.015549564Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 10 01:05:52.015737 containerd[1469]: time="2026-03-10T01:05:52.015606550Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 10 01:05:52.015737 containerd[1469]: time="2026-03-10T01:05:52.015632179Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.015737 containerd[1469]: time="2026-03-10T01:05:52.015658698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.015837 containerd[1469]: time="2026-03-10T01:05:52.015745400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.015837 containerd[1469]: time="2026-03-10T01:05:52.015767561Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.015837 containerd[1469]: time="2026-03-10T01:05:52.015787208Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.015837 containerd[1469]: time="2026-03-10T01:05:52.015808538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.015837 containerd[1469]: time="2026-03-10T01:05:52.015827984Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 10 01:05:52.016004 containerd[1469]: time="2026-03-10T01:05:52.015883839Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 01:05:52.016040 containerd[1469]: time="2026-03-10T01:05:52.016011166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016040 containerd[1469]: time="2026-03-10T01:05:52.016035902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016258 containerd[1469]: time="2026-03-10T01:05:52.016162288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016258 containerd[1469]: time="2026-03-10T01:05:52.016217752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016258 containerd[1469]: time="2026-03-10T01:05:52.016237608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016258 containerd[1469]: time="2026-03-10T01:05:52.016255842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016416 containerd[1469]: time="2026-03-10T01:05:52.016301458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016416 containerd[1469]: time="2026-03-10T01:05:52.016323900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016416 containerd[1469]: time="2026-03-10T01:05:52.016343627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016416 containerd[1469]: time="2026-03-10T01:05:52.016362922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016556 containerd[1469]: time="2026-03-10T01:05:52.016421081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 01:05:52.016556 containerd[1469]: time="2026-03-10T01:05:52.016442912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016556 containerd[1469]: time="2026-03-10T01:05:52.016496562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016556 containerd[1469]: time="2026-03-10T01:05:52.016545283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 10 01:05:52.016773 containerd[1469]: time="2026-03-10T01:05:52.016616416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016773 containerd[1469]: time="2026-03-10T01:05:52.016638607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.016773 containerd[1469]: time="2026-03-10T01:05:52.016656651Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 10 01:05:52.017635 containerd[1469]: time="2026-03-10T01:05:52.017238867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 10 01:05:52.017635 containerd[1469]: time="2026-03-10T01:05:52.017276448Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 10 01:05:52.017635 containerd[1469]: time="2026-03-10T01:05:52.017382656Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 10 01:05:52.017635 containerd[1469]: time="2026-03-10T01:05:52.017403355Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 10 01:05:52.017888 containerd[1469]: time="2026-03-10T01:05:52.017644174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.018025 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 10 01:05:52.018428 containerd[1469]: time="2026-03-10T01:05:52.018040023Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 10 01:05:52.018464 containerd[1469]: time="2026-03-10T01:05:52.018424381Z" level=info msg="NRI interface is disabled by configuration." Mar 10 01:05:52.019959 containerd[1469]: time="2026-03-10T01:05:52.019551233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 10 01:05:52.024412 containerd[1469]: time="2026-03-10T01:05:52.024206753Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false 
IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 10 01:05:52.024412 containerd[1469]: time="2026-03-10T01:05:52.024300408Z" level=info msg="Connect containerd service" Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.024541528Z" level=info msg="using legacy CRI server" Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.024557568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.026444290Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.030463351Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.031203302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.031281829Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 10 01:05:52.037089 containerd[1469]: time="2026-03-10T01:05:52.032197556Z" level=info msg="Start subscribing containerd event" Mar 10 01:05:52.038712 containerd[1469]: time="2026-03-10T01:05:52.038192276Z" level=info msg="Start recovering state" Mar 10 01:05:52.038712 containerd[1469]: time="2026-03-10T01:05:52.038426864Z" level=info msg="Start event monitor" Mar 10 01:05:52.038712 containerd[1469]: time="2026-03-10T01:05:52.038514948Z" level=info msg="Start snapshots syncer" Mar 10 01:05:52.038712 containerd[1469]: time="2026-03-10T01:05:52.038560673Z" level=info msg="Start cni network conf syncer for default" Mar 10 01:05:52.038712 containerd[1469]: time="2026-03-10T01:05:52.038584417Z" level=info msg="Start streaming server" Mar 10 01:05:52.038966 systemd[1]: Started containerd.service - containerd container runtime. Mar 10 01:05:52.039286 containerd[1469]: time="2026-03-10T01:05:52.039266591Z" level=info msg="containerd successfully booted in 0.279514s" Mar 10 01:05:52.290846 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:58284.service - OpenSSH per-connection server daemon (10.0.0.1:58284). Mar 10 01:05:52.314595 tar[1465]: linux-amd64/README.md Mar 10 01:05:52.387882 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 10 01:05:52.399239 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 58284 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:05:52.403032 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:05:52.411747 systemd-logind[1459]: New session 2 of user core. Mar 10 01:05:52.431316 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 10 01:05:52.507226 sshd[1560]: pam_unix(sshd:session): session closed for user core Mar 10 01:05:52.522475 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:58284.service: Deactivated successfully. Mar 10 01:05:52.532165 systemd[1]: session-2.scope: Deactivated successfully. Mar 10 01:05:52.534462 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Mar 10 01:05:52.554380 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:58294.service - OpenSSH per-connection server daemon (10.0.0.1:58294). Mar 10 01:05:52.561855 systemd-logind[1459]: Removed session 2. Mar 10 01:05:52.680069 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:05:52.684018 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:05:52.696021 systemd-logind[1459]: New session 3 of user core. Mar 10 01:05:52.718295 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 01:05:52.814862 sshd[1570]: pam_unix(sshd:session): session closed for user core Mar 10 01:05:52.821920 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:58294.service: Deactivated successfully. Mar 10 01:05:52.824899 systemd[1]: session-3.scope: Deactivated successfully. Mar 10 01:05:52.826212 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Mar 10 01:05:52.828183 systemd-logind[1459]: Removed session 3. Mar 10 01:05:54.506409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:05:54.512100 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 01:05:54.516617 systemd[1]: Startup finished in 3.706s (kernel) + 8.268s (initrd) + 9.874s (userspace) = 21.848s. Mar 10 01:05:54.526554 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:05:55.471812 kubelet[1581]: E0310 01:05:55.471381 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:05:55.477118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:05:55.477432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:05:55.478403 systemd[1]: kubelet.service: Consumed 3.599s CPU time. Mar 10 01:06:02.866130 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:52110.service - OpenSSH per-connection server daemon (10.0.0.1:52110). Mar 10 01:06:02.916835 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 52110 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:06:02.919453 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:06:02.929212 systemd-logind[1459]: New session 4 of user core. Mar 10 01:06:02.935987 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 01:06:03.002913 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 10 01:06:03.015915 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:52110.service: Deactivated successfully. Mar 10 01:06:03.018026 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 01:06:03.021057 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. 
Mar 10 01:06:03.032122 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:52112.service - OpenSSH per-connection server daemon (10.0.0.1:52112). Mar 10 01:06:03.033538 systemd-logind[1459]: Removed session 4. Mar 10 01:06:03.076649 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 52112 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:06:03.080560 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:06:03.104293 systemd-logind[1459]: New session 5 of user core. Mar 10 01:06:03.114075 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 01:06:03.173156 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 10 01:06:03.193767 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:52112.service: Deactivated successfully. Mar 10 01:06:03.197828 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 01:06:03.201492 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Mar 10 01:06:03.217507 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:52114.service - OpenSSH per-connection server daemon (10.0.0.1:52114). Mar 10 01:06:03.219370 systemd-logind[1459]: Removed session 5. Mar 10 01:06:03.272791 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 52114 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:06:03.275913 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:06:03.284829 systemd-logind[1459]: New session 6 of user core. Mar 10 01:06:03.299001 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 01:06:03.362983 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 10 01:06:03.379019 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:52114.service: Deactivated successfully. Mar 10 01:06:03.382783 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 01:06:03.385194 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. 
Mar 10 01:06:03.402377 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:52126.service - OpenSSH per-connection server daemon (10.0.0.1:52126). Mar 10 01:06:03.404173 systemd-logind[1459]: Removed session 6. Mar 10 01:06:03.447153 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 52126 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:06:03.449609 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:06:03.455945 systemd-logind[1459]: New session 7 of user core. Mar 10 01:06:03.466163 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 10 01:06:03.561080 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 01:06:03.561945 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:06:03.590227 sudo[1619]: pam_unix(sudo:session): session closed for user root Mar 10 01:06:03.593792 sshd[1616]: pam_unix(sshd:session): session closed for user core Mar 10 01:06:03.613578 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:52126.service: Deactivated successfully. Mar 10 01:06:03.615841 systemd[1]: session-7.scope: Deactivated successfully. Mar 10 01:06:03.617791 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Mar 10 01:06:03.630557 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:52134.service - OpenSSH per-connection server daemon (10.0.0.1:52134). Mar 10 01:06:03.632482 systemd-logind[1459]: Removed session 7. Mar 10 01:06:03.721344 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 52134 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:06:03.723801 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:06:03.732177 systemd-logind[1459]: New session 8 of user core. Mar 10 01:06:03.746989 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 10 01:06:03.845051 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 10 01:06:03.845980 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:06:03.855847 sudo[1628]: pam_unix(sudo:session): session closed for user root Mar 10 01:06:03.865761 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 10 01:06:03.866222 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:06:03.889198 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 10 01:06:03.893941 auditctl[1631]: No rules Mar 10 01:06:03.894594 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 01:06:03.895070 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 10 01:06:03.898637 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 10 01:06:03.978799 augenrules[1649]: No rules Mar 10 01:06:03.980043 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 01:06:03.982088 sudo[1627]: pam_unix(sudo:session): session closed for user root Mar 10 01:06:03.985872 sshd[1624]: pam_unix(sshd:session): session closed for user core Mar 10 01:06:04.010551 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:52134.service: Deactivated successfully. Mar 10 01:06:04.013245 systemd[1]: session-8.scope: Deactivated successfully. Mar 10 01:06:04.015602 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Mar 10 01:06:04.029481 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:52138.service - OpenSSH per-connection server daemon (10.0.0.1:52138). Mar 10 01:06:04.031422 systemd-logind[1459]: Removed session 8. 
Mar 10 01:06:04.090410 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 52138 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:06:04.107079 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:06:04.126041 systemd-logind[1459]: New session 9 of user core. Mar 10 01:06:04.139038 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 10 01:06:04.205432 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 10 01:06:04.206143 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:06:04.580064 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 10 01:06:04.580236 (dockerd)[1680]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 10 01:06:04.933768 dockerd[1680]: time="2026-03-10T01:06:04.933435865Z" level=info msg="Starting up" Mar 10 01:06:05.085789 dockerd[1680]: time="2026-03-10T01:06:05.085643376Z" level=info msg="Loading containers: start." Mar 10 01:06:05.248745 kernel: Initializing XFRM netlink socket Mar 10 01:06:05.355073 systemd-networkd[1394]: docker0: Link UP Mar 10 01:06:05.378090 dockerd[1680]: time="2026-03-10T01:06:05.378002614Z" level=info msg="Loading containers: done." 
Mar 10 01:06:05.400867 dockerd[1680]: time="2026-03-10T01:06:05.400791794Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 10 01:06:05.401045 dockerd[1680]: time="2026-03-10T01:06:05.400921426Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 10 01:06:05.401073 dockerd[1680]: time="2026-03-10T01:06:05.401042863Z" level=info msg="Daemon has completed initialization" Mar 10 01:06:05.449881 dockerd[1680]: time="2026-03-10T01:06:05.449763032Z" level=info msg="API listen on /run/docker.sock" Mar 10 01:06:05.450011 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 10 01:06:05.727882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 10 01:06:05.745975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:05.940313 containerd[1469]: time="2026-03-10T01:06:05.940132507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 10 01:06:06.034649 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck704882073-merged.mount: Deactivated successfully. Mar 10 01:06:06.112316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:06:06.122059 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:06:06.184623 kubelet[1837]: E0310 01:06:06.184558 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:06:06.189117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:06:06.189385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:06:06.556955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109474160.mount: Deactivated successfully. Mar 10 01:06:07.763997 containerd[1469]: time="2026-03-10T01:06:07.763912912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:07.764901 containerd[1469]: time="2026-03-10T01:06:07.764705160Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 10 01:06:07.766161 containerd[1469]: time="2026-03-10T01:06:07.766105033Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:07.769612 containerd[1469]: time="2026-03-10T01:06:07.769541327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:07.770874 containerd[1469]: time="2026-03-10T01:06:07.770818797Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id 
\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.830599527s" Mar 10 01:06:07.770874 containerd[1469]: time="2026-03-10T01:06:07.770871305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 10 01:06:07.771873 containerd[1469]: time="2026-03-10T01:06:07.771704434Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 10 01:06:08.892199 containerd[1469]: time="2026-03-10T01:06:08.892038857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:08.893261 containerd[1469]: time="2026-03-10T01:06:08.893173307Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 10 01:06:08.894439 containerd[1469]: time="2026-03-10T01:06:08.894357446Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:08.897989 containerd[1469]: time="2026-03-10T01:06:08.897897583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:08.899739 containerd[1469]: time="2026-03-10T01:06:08.899708408Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.127933613s" Mar 10 01:06:08.899868 containerd[1469]: time="2026-03-10T01:06:08.899810539Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 10 01:06:08.901483 containerd[1469]: time="2026-03-10T01:06:08.901182420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 10 01:06:09.857758 containerd[1469]: time="2026-03-10T01:06:09.857641256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:09.858583 containerd[1469]: time="2026-03-10T01:06:09.858491375Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 10 01:06:09.859641 containerd[1469]: time="2026-03-10T01:06:09.859533398Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:09.863178 containerd[1469]: time="2026-03-10T01:06:09.863078680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:09.864370 containerd[1469]: time="2026-03-10T01:06:09.864245122Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 962.997462ms" Mar 10 01:06:09.864370 
containerd[1469]: time="2026-03-10T01:06:09.864293673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 10 01:06:09.865013 containerd[1469]: time="2026-03-10T01:06:09.864873556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 10 01:06:11.139048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297798089.mount: Deactivated successfully. Mar 10 01:06:11.406457 containerd[1469]: time="2026-03-10T01:06:11.406280593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:11.407506 containerd[1469]: time="2026-03-10T01:06:11.407412135Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 10 01:06:11.408488 containerd[1469]: time="2026-03-10T01:06:11.408421970Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:11.411108 containerd[1469]: time="2026-03-10T01:06:11.411027437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:11.411596 containerd[1469]: time="2026-03-10T01:06:11.411555447Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.546524156s" Mar 10 01:06:11.411639 containerd[1469]: time="2026-03-10T01:06:11.411600762Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 10 01:06:11.412421 containerd[1469]: time="2026-03-10T01:06:11.412249167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 10 01:06:11.863453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882720733.mount: Deactivated successfully. Mar 10 01:06:12.750269 containerd[1469]: time="2026-03-10T01:06:12.750165127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:12.751591 containerd[1469]: time="2026-03-10T01:06:12.751484567Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 10 01:06:12.752646 containerd[1469]: time="2026-03-10T01:06:12.752591323Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:12.758174 containerd[1469]: time="2026-03-10T01:06:12.758122901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:12.760241 containerd[1469]: time="2026-03-10T01:06:12.760141143Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.347826102s" Mar 10 01:06:12.760241 containerd[1469]: time="2026-03-10T01:06:12.760203149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 10 01:06:12.761066 containerd[1469]: time="2026-03-10T01:06:12.760988557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 10 01:06:13.160472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923801290.mount: Deactivated successfully. Mar 10 01:06:13.167546 containerd[1469]: time="2026-03-10T01:06:13.167399695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:13.168784 containerd[1469]: time="2026-03-10T01:06:13.168704850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 10 01:06:13.170116 containerd[1469]: time="2026-03-10T01:06:13.170046275Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:13.173912 containerd[1469]: time="2026-03-10T01:06:13.173818295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:13.174738 containerd[1469]: time="2026-03-10T01:06:13.174635116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 413.616723ms" Mar 10 01:06:13.174829 containerd[1469]: time="2026-03-10T01:06:13.174739401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 10 01:06:13.175557 containerd[1469]: 
time="2026-03-10T01:06:13.175385216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 10 01:06:13.610152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759000913.mount: Deactivated successfully. Mar 10 01:06:14.535335 containerd[1469]: time="2026-03-10T01:06:14.534976498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:14.536551 containerd[1469]: time="2026-03-10T01:06:14.535796108Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 10 01:06:14.537461 containerd[1469]: time="2026-03-10T01:06:14.537361449Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:14.541004 containerd[1469]: time="2026-03-10T01:06:14.540915715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:14.542658 containerd[1469]: time="2026-03-10T01:06:14.542584960Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.367150643s" Mar 10 01:06:14.542770 containerd[1469]: time="2026-03-10T01:06:14.542658759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 10 01:06:16.507177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 10 01:06:16.538894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:17.495378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:06:17.513269 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:06:17.650484 kubelet[2069]: E0310 01:06:17.650063 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:06:17.658841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:06:17.659197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:06:18.897103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:06:18.906034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:18.955399 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-9.scope)... Mar 10 01:06:18.955485 systemd[1]: Reloading... Mar 10 01:06:19.074856 zram_generator::config[2127]: No configuration found. Mar 10 01:06:19.303871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:06:19.402785 systemd[1]: Reloading finished in 446 ms. Mar 10 01:06:19.504325 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:19.514742 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:06:19.516036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:06:19.538237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:19.792514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:06:19.807307 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:06:19.980856 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:06:19.980856 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:06:19.980856 kubelet[2175]: I0310 01:06:19.980901 2175 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:06:20.406771 kubelet[2175]: I0310 01:06:20.406629 2175 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 10 01:06:20.406771 kubelet[2175]: I0310 01:06:20.406740 2175 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:06:20.408592 kubelet[2175]: I0310 01:06:20.408506 2175 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:06:20.408592 kubelet[2175]: I0310 01:06:20.408563 2175 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 01:06:20.409077 kubelet[2175]: I0310 01:06:20.409001 2175 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:06:20.506022 kubelet[2175]: I0310 01:06:20.505619 2175 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:06:20.506908 kubelet[2175]: E0310 01:06:20.506365 2175 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:06:20.523277 kubelet[2175]: E0310 01:06:20.521781 2175 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:06:20.524532 kubelet[2175]: I0310 01:06:20.523872 2175 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 10 01:06:20.545006 kubelet[2175]: I0310 01:06:20.543979 2175 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 01:06:20.553266 kubelet[2175]: I0310 01:06:20.550361 2175 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:06:20.555074 kubelet[2175]: I0310 01:06:20.553598 2175 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:06:20.555865 kubelet[2175]: I0310 01:06:20.555218 2175 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:06:20.555865 
kubelet[2175]: I0310 01:06:20.555239 2175 container_manager_linux.go:306] "Creating device plugin manager" Mar 10 01:06:20.556104 kubelet[2175]: I0310 01:06:20.556007 2175 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 01:06:20.567897 kubelet[2175]: I0310 01:06:20.567644 2175 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:06:20.568773 kubelet[2175]: I0310 01:06:20.568604 2175 kubelet.go:475] "Attempting to sync node with API server" Mar 10 01:06:20.568773 kubelet[2175]: I0310 01:06:20.568659 2175 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:06:20.568910 kubelet[2175]: I0310 01:06:20.568890 2175 kubelet.go:387] "Adding apiserver pod source" Mar 10 01:06:20.569254 kubelet[2175]: I0310 01:06:20.569083 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:06:20.570486 kubelet[2175]: E0310 01:06:20.570384 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:06:20.570615 kubelet[2175]: E0310 01:06:20.570547 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:06:20.575632 kubelet[2175]: I0310 01:06:20.575564 2175 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:06:20.576518 kubelet[2175]: I0310 01:06:20.576404 2175 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:06:20.576518 kubelet[2175]: I0310 01:06:20.576510 2175 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 01:06:20.576855 kubelet[2175]: W0310 01:06:20.576792 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 10 01:06:20.581410 kubelet[2175]: I0310 01:06:20.581282 2175 server.go:1262] "Started kubelet" Mar 10 01:06:20.581535 kubelet[2175]: I0310 01:06:20.581507 2175 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:06:20.581709 kubelet[2175]: I0310 01:06:20.581574 2175 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 01:06:20.582602 kubelet[2175]: I0310 01:06:20.582545 2175 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:06:20.582751 kubelet[2175]: I0310 01:06:20.582386 2175 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:06:20.584535 kubelet[2175]: I0310 01:06:20.583797 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 01:06:20.593014 kubelet[2175]: I0310 01:06:20.592807 2175 server.go:310] "Adding debug handlers to kubelet server" Mar 10 01:06:20.602519 kubelet[2175]: I0310 01:06:20.602244 2175 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:06:20.608635 kubelet[2175]: E0310 01:06:20.600943 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b55675416b2f2 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:06:20.581188338 +0000 UTC m=+0.766198534,LastTimestamp:2026-03-10 01:06:20.581188338 +0000 UTC m=+0.766198534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:06:20.608635 kubelet[2175]: I0310 01:06:20.606217 2175 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 10 01:06:20.608635 kubelet[2175]: I0310 01:06:20.606419 2175 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 01:06:20.608635 kubelet[2175]: I0310 01:06:20.606584 2175 reconciler.go:29] "Reconciler: start to sync state" Mar 10 01:06:20.608635 kubelet[2175]: E0310 01:06:20.606655 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:20.608635 kubelet[2175]: E0310 01:06:20.607230 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Mar 10 01:06:20.608635 kubelet[2175]: E0310 01:06:20.607420 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:06:20.610094 kubelet[2175]: I0310 01:06:20.609985 2175 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:06:20.610165 
kubelet[2175]: I0310 01:06:20.610146 2175 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:06:20.612606 kubelet[2175]: E0310 01:06:20.612508 2175 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:06:20.614017 kubelet[2175]: I0310 01:06:20.613975 2175 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:06:20.711622 kubelet[2175]: E0310 01:06:20.711283 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:20.725310 kubelet[2175]: I0310 01:06:20.724927 2175 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 01:06:20.746130 kubelet[2175]: I0310 01:06:20.745794 2175 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 10 01:06:20.746130 kubelet[2175]: I0310 01:06:20.745982 2175 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 10 01:06:20.753960 kubelet[2175]: I0310 01:06:20.751618 2175 kubelet.go:2428] "Starting kubelet main sync loop" Mar 10 01:06:20.753960 kubelet[2175]: E0310 01:06:20.752797 2175 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:06:20.754164 kubelet[2175]: E0310 01:06:20.754065 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:06:20.765121 kubelet[2175]: I0310 01:06:20.764902 2175 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 01:06:20.765909 kubelet[2175]: I0310 01:06:20.765887 2175 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 01:06:20.766005 kubelet[2175]: I0310 01:06:20.765993 2175 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:06:20.769910 kubelet[2175]: I0310 01:06:20.769850 2175 policy_none.go:49] "None policy: Start" Mar 10 01:06:20.769987 kubelet[2175]: I0310 01:06:20.769952 2175 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 01:06:20.769987 kubelet[2175]: I0310 01:06:20.769978 2175 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 01:06:20.772355 kubelet[2175]: I0310 01:06:20.772289 2175 policy_none.go:47] "Start" Mar 10 01:06:20.799192 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 10 01:06:20.808346 kubelet[2175]: E0310 01:06:20.808264 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Mar 10 01:06:20.813326 kubelet[2175]: E0310 01:06:20.812950 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:20.854765 kubelet[2175]: E0310 01:06:20.854210 2175 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:06:20.888465 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 10 01:06:20.942143 kubelet[2175]: E0310 01:06:20.941758 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:20.957658 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 10 01:06:20.960351 kubelet[2175]: E0310 01:06:20.960309 2175 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:06:20.960876 kubelet[2175]: I0310 01:06:20.960784 2175 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 01:06:20.960912 kubelet[2175]: I0310 01:06:20.960843 2175 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:06:20.961945 kubelet[2175]: I0310 01:06:20.961814 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 01:06:20.965777 kubelet[2175]: E0310 01:06:20.965495 2175 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 10 01:06:20.965777 kubelet[2175]: E0310 01:06:20.965582 2175 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:06:21.062391 kubelet[2175]: I0310 01:06:21.062327 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:21.063214 kubelet[2175]: E0310 01:06:21.063125 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 01:06:21.073265 systemd[1]: Created slice kubepods-burstable-pode40fdf347db04b8c8ebedd4f03a81ce6.slice - libcontainer container kubepods-burstable-pode40fdf347db04b8c8ebedd4f03a81ce6.slice. Mar 10 01:06:21.090537 kubelet[2175]: E0310 01:06:21.090479 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:21.095299 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 10 01:06:21.099228 kubelet[2175]: E0310 01:06:21.099128 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:21.103852 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 10 01:06:21.108534 kubelet[2175]: E0310 01:06:21.108414 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:21.141356 kubelet[2175]: I0310 01:06:21.141159 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e40fdf347db04b8c8ebedd4f03a81ce6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e40fdf347db04b8c8ebedd4f03a81ce6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:21.141356 kubelet[2175]: I0310 01:06:21.141258 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:21.141356 kubelet[2175]: I0310 01:06:21.141295 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:21.141788 kubelet[2175]: I0310 01:06:21.141399 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:21.141788 kubelet[2175]: I0310 01:06:21.141560 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:21.141788 kubelet[2175]: I0310 01:06:21.141610 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:21.141788 kubelet[2175]: I0310 01:06:21.141712 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:21.141788 kubelet[2175]: I0310 01:06:21.141731 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e40fdf347db04b8c8ebedd4f03a81ce6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40fdf347db04b8c8ebedd4f03a81ce6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:21.141954 kubelet[2175]: I0310 01:06:21.141768 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e40fdf347db04b8c8ebedd4f03a81ce6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40fdf347db04b8c8ebedd4f03a81ce6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:21.247273 kubelet[2175]: E0310 01:06:21.246417 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Mar 10 01:06:21.288963 kubelet[2175]: I0310 01:06:21.286384 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:21.293011 kubelet[2175]: E0310 01:06:21.292901 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 01:06:21.401370 kubelet[2175]: E0310 01:06:21.400732 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:21.405867 kubelet[2175]: E0310 01:06:21.404868 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:21.406008 containerd[1469]: time="2026-03-10T01:06:21.405471044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e40fdf347db04b8c8ebedd4f03a81ce6,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:21.406008 containerd[1469]: time="2026-03-10T01:06:21.405806819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:21.412516 kubelet[2175]: E0310 01:06:21.412397 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:21.413545 containerd[1469]: time="2026-03-10T01:06:21.413329648Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:21.559726 kubelet[2175]: E0310 01:06:21.559292 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:06:21.711528 kubelet[2175]: I0310 01:06:21.711146 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:21.712912 kubelet[2175]: E0310 01:06:21.712633 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 01:06:21.993290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216040165.mount: Deactivated successfully. 
Mar 10 01:06:22.015146 containerd[1469]: time="2026-03-10T01:06:22.014802027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:06:22.028893 containerd[1469]: time="2026-03-10T01:06:22.028346613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:06:22.031179 containerd[1469]: time="2026-03-10T01:06:22.031044271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:06:22.034598 containerd[1469]: time="2026-03-10T01:06:22.033197522Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:06:22.036270 containerd[1469]: time="2026-03-10T01:06:22.036181556Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:06:22.037788 containerd[1469]: time="2026-03-10T01:06:22.037638986Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:06:22.039282 containerd[1469]: time="2026-03-10T01:06:22.038959682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 10 01:06:22.040240 kubelet[2175]: E0310 01:06:22.040139 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:06:22.045209 containerd[1469]: time="2026-03-10T01:06:22.044974913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:06:22.046967 containerd[1469]: time="2026-03-10T01:06:22.046871606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 640.931784ms" Mar 10 01:06:22.047610 kubelet[2175]: E0310 01:06:22.047495 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Mar 10 01:06:22.048468 containerd[1469]: time="2026-03-10T01:06:22.048363759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.847031ms" Mar 10 01:06:22.054131 kubelet[2175]: E0310 01:06:22.054024 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:06:22.056259 containerd[1469]: 
time="2026-03-10T01:06:22.056158132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.218129ms" Mar 10 01:06:22.758551 kubelet[2175]: E0310 01:06:22.758082 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:06:22.758551 kubelet[2175]: E0310 01:06:22.758740 2175 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:06:22.762821 kubelet[2175]: I0310 01:06:22.762642 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:22.765030 kubelet[2175]: E0310 01:06:22.763652 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 01:06:22.958143 containerd[1469]: time="2026-03-10T01:06:22.956831504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:22.958143 containerd[1469]: time="2026-03-10T01:06:22.957044796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:22.958143 containerd[1469]: time="2026-03-10T01:06:22.957068761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:22.958143 containerd[1469]: time="2026-03-10T01:06:22.957534915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:22.962349 containerd[1469]: time="2026-03-10T01:06:22.961785680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:22.962349 containerd[1469]: time="2026-03-10T01:06:22.962175946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:22.966078 containerd[1469]: time="2026-03-10T01:06:22.965839036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:22.966078 containerd[1469]: time="2026-03-10T01:06:22.965961881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:22.968570 containerd[1469]: time="2026-03-10T01:06:22.968366961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:22.968640 containerd[1469]: time="2026-03-10T01:06:22.968575954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:22.968640 containerd[1469]: time="2026-03-10T01:06:22.968612932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:22.968927 containerd[1469]: time="2026-03-10T01:06:22.968851851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:23.648811 systemd[1]: Started cri-containerd-69e3c6c90aef1625dfe0e4f961d7e5235c2c658f61237f2a1227e046c4f3502b.scope - libcontainer container 69e3c6c90aef1625dfe0e4f961d7e5235c2c658f61237f2a1227e046c4f3502b. Mar 10 01:06:23.652261 kubelet[2175]: E0310 01:06:23.652173 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="3.2s" Mar 10 01:06:23.666993 systemd[1]: Started cri-containerd-4e58b80957c1ab9d903b737a60837b6573b4b82b13a0e292c7eccebf3bd3e41b.scope - libcontainer container 4e58b80957c1ab9d903b737a60837b6573b4b82b13a0e292c7eccebf3bd3e41b. Mar 10 01:06:23.674468 systemd[1]: Started cri-containerd-ab92a7437065db50790b1e4412a4c67d86b8c3de1ea1ee0f6b1ac347baf53a82.scope - libcontainer container ab92a7437065db50790b1e4412a4c67d86b8c3de1ea1ee0f6b1ac347baf53a82. 
Mar 10 01:06:23.797920 containerd[1469]: time="2026-03-10T01:06:23.797014661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e58b80957c1ab9d903b737a60837b6573b4b82b13a0e292c7eccebf3bd3e41b\"" Mar 10 01:06:23.809928 containerd[1469]: time="2026-03-10T01:06:23.809798007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e3c6c90aef1625dfe0e4f961d7e5235c2c658f61237f2a1227e046c4f3502b\"" Mar 10 01:06:23.836510 kubelet[2175]: E0310 01:06:23.836452 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:23.837144 kubelet[2175]: E0310 01:06:23.836902 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:23.838015 containerd[1469]: time="2026-03-10T01:06:23.837906030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e40fdf347db04b8c8ebedd4f03a81ce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab92a7437065db50790b1e4412a4c67d86b8c3de1ea1ee0f6b1ac347baf53a82\"" Mar 10 01:06:23.840052 kubelet[2175]: E0310 01:06:23.840026 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:23.845994 containerd[1469]: time="2026-03-10T01:06:23.845887566Z" level=info msg="CreateContainer within sandbox \"69e3c6c90aef1625dfe0e4f961d7e5235c2c658f61237f2a1227e046c4f3502b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 01:06:23.847770 containerd[1469]: 
time="2026-03-10T01:06:23.847640956Z" level=info msg="CreateContainer within sandbox \"4e58b80957c1ab9d903b737a60837b6573b4b82b13a0e292c7eccebf3bd3e41b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 01:06:23.850459 containerd[1469]: time="2026-03-10T01:06:23.850354130Z" level=info msg="CreateContainer within sandbox \"ab92a7437065db50790b1e4412a4c67d86b8c3de1ea1ee0f6b1ac347baf53a82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 01:06:24.069844 containerd[1469]: time="2026-03-10T01:06:24.064354669Z" level=info msg="CreateContainer within sandbox \"69e3c6c90aef1625dfe0e4f961d7e5235c2c658f61237f2a1227e046c4f3502b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f80e311500bb853bab4e3eeb4e7eb14ceddc7d0adff7d84cb835c4c22ddc1a7d\"" Mar 10 01:06:24.183727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834138324.mount: Deactivated successfully. Mar 10 01:06:24.184512 kubelet[2175]: E0310 01:06:24.170560 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:06:24.189613 containerd[1469]: time="2026-03-10T01:06:24.189582329Z" level=info msg="StartContainer for \"f80e311500bb853bab4e3eeb4e7eb14ceddc7d0adff7d84cb835c4c22ddc1a7d\"" Mar 10 01:06:24.197703 containerd[1469]: time="2026-03-10T01:06:24.197591113Z" level=info msg="CreateContainer within sandbox \"ab92a7437065db50790b1e4412a4c67d86b8c3de1ea1ee0f6b1ac347baf53a82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b58ab0c206d9a32af7f779515028765e1617cab7060649485137b311b3bc3dd\"" Mar 10 01:06:24.200575 containerd[1469]: time="2026-03-10T01:06:24.198852610Z" level=info msg="StartContainer for 
\"6b58ab0c206d9a32af7f779515028765e1617cab7060649485137b311b3bc3dd\"" Mar 10 01:06:24.205290 containerd[1469]: time="2026-03-10T01:06:24.205249348Z" level=info msg="CreateContainer within sandbox \"4e58b80957c1ab9d903b737a60837b6573b4b82b13a0e292c7eccebf3bd3e41b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f8ffe66489742232470562efba7301fab8638f7cf9285d456b1fb4519bdd294\"" Mar 10 01:06:24.206187 containerd[1469]: time="2026-03-10T01:06:24.206155986Z" level=info msg="StartContainer for \"4f8ffe66489742232470562efba7301fab8638f7cf9285d456b1fb4519bdd294\"" Mar 10 01:06:24.376178 systemd[1]: Started cri-containerd-4f8ffe66489742232470562efba7301fab8638f7cf9285d456b1fb4519bdd294.scope - libcontainer container 4f8ffe66489742232470562efba7301fab8638f7cf9285d456b1fb4519bdd294. Mar 10 01:06:24.396795 kubelet[2175]: I0310 01:06:24.396205 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:24.398198 systemd[1]: Started cri-containerd-f80e311500bb853bab4e3eeb4e7eb14ceddc7d0adff7d84cb835c4c22ddc1a7d.scope - libcontainer container f80e311500bb853bab4e3eeb4e7eb14ceddc7d0adff7d84cb835c4c22ddc1a7d. Mar 10 01:06:24.479961 kubelet[2175]: E0310 01:06:24.410909 2175 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 01:06:24.497978 systemd[1]: Started cri-containerd-6b58ab0c206d9a32af7f779515028765e1617cab7060649485137b311b3bc3dd.scope - libcontainer container 6b58ab0c206d9a32af7f779515028765e1617cab7060649485137b311b3bc3dd. 
Mar 10 01:06:24.563438 kubelet[2175]: E0310 01:06:24.560526 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b55675416b2f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:06:20.581188338 +0000 UTC m=+0.766198534,LastTimestamp:2026-03-10 01:06:20.581188338 +0000 UTC m=+0.766198534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:06:24.671213 kubelet[2175]: E0310 01:06:24.670440 2175 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:06:24.677772 containerd[1469]: time="2026-03-10T01:06:24.676551830Z" level=info msg="StartContainer for \"f80e311500bb853bab4e3eeb4e7eb14ceddc7d0adff7d84cb835c4c22ddc1a7d\" returns successfully" Mar 10 01:06:24.677772 containerd[1469]: time="2026-03-10T01:06:24.676636557Z" level=info msg="StartContainer for \"6b58ab0c206d9a32af7f779515028765e1617cab7060649485137b311b3bc3dd\" returns successfully" Mar 10 01:06:24.691096 containerd[1469]: time="2026-03-10T01:06:24.690909644Z" level=info msg="StartContainer for \"4f8ffe66489742232470562efba7301fab8638f7cf9285d456b1fb4519bdd294\" returns successfully" Mar 10 01:06:24.738001 kubelet[2175]: E0310 01:06:24.737928 2175 reflector.go:205] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:06:24.800243 kubelet[2175]: E0310 01:06:24.800080 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:24.800432 kubelet[2175]: E0310 01:06:24.800315 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:24.802131 kubelet[2175]: E0310 01:06:24.802018 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:24.802393 kubelet[2175]: E0310 01:06:24.802233 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:24.809598 kubelet[2175]: E0310 01:06:24.809533 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:24.810035 kubelet[2175]: E0310 01:06:24.809855 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:25.879622 kubelet[2175]: E0310 01:06:25.878774 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:25.880637 kubelet[2175]: E0310 01:06:25.879893 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:25.882708 kubelet[2175]: E0310 01:06:25.881225 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:25.882708 kubelet[2175]: E0310 01:06:25.881444 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:26.880710 kubelet[2175]: E0310 01:06:26.880410 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:26.881768 kubelet[2175]: E0310 01:06:26.881098 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:27.427627 kubelet[2175]: E0310 01:06:27.427536 2175 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:27.430710 kubelet[2175]: E0310 01:06:27.427926 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:27.760530 kubelet[2175]: I0310 01:06:27.757172 2175 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:27.782224 kubelet[2175]: E0310 01:06:27.782154 2175 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 01:06:27.882043 kubelet[2175]: I0310 01:06:27.881835 2175 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:06:27.882043 kubelet[2175]: E0310 01:06:27.881881 2175 
kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 10 01:06:27.910713 kubelet[2175]: E0310 01:06:27.909122 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.013739 kubelet[2175]: E0310 01:06:28.009850 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.111391 kubelet[2175]: E0310 01:06:28.111235 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.211919 kubelet[2175]: E0310 01:06:28.211755 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.312924 kubelet[2175]: E0310 01:06:28.312631 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.413967 kubelet[2175]: E0310 01:06:28.413810 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.523375 kubelet[2175]: E0310 01:06:28.521753 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.622340 kubelet[2175]: E0310 01:06:28.622173 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.725850 kubelet[2175]: E0310 01:06:28.722812 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.832930 kubelet[2175]: E0310 01:06:28.832383 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:28.934268 kubelet[2175]: E0310 01:06:28.933750 2175 kubelet_node_status.go:404] "Error getting the current node from 
lister" err="node \"localhost\" not found" Mar 10 01:06:29.034341 kubelet[2175]: E0310 01:06:29.034178 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.135643 kubelet[2175]: E0310 01:06:29.135521 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.236033 kubelet[2175]: E0310 01:06:29.235821 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.337373 kubelet[2175]: E0310 01:06:29.337123 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.438542 kubelet[2175]: E0310 01:06:29.438437 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.539464 kubelet[2175]: E0310 01:06:29.539159 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.640322 kubelet[2175]: E0310 01:06:29.640179 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.741561 kubelet[2175]: E0310 01:06:29.741410 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.842744 kubelet[2175]: E0310 01:06:29.842398 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:29.942788 kubelet[2175]: E0310 01:06:29.942644 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:30.043412 kubelet[2175]: E0310 01:06:30.043197 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:30.151370 kubelet[2175]: E0310 
01:06:30.150331 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:30.255235 kubelet[2175]: E0310 01:06:30.254636 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:30.372395 kubelet[2175]: E0310 01:06:30.371154 2175 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:30.536390 kubelet[2175]: I0310 01:06:30.517403 2175 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:30.568424 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-9.scope)... Mar 10 01:06:30.568460 systemd[1]: Reloading... Mar 10 01:06:30.777081 kubelet[2175]: I0310 01:06:30.774151 2175 apiserver.go:52] "Watching apiserver" Mar 10 01:06:30.798575 kubelet[2175]: I0310 01:06:30.796371 2175 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:30.890757 kubelet[2175]: I0310 01:06:30.887628 2175 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:30.913946 kubelet[2175]: I0310 01:06:30.913377 2175 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:06:31.006536 kubelet[2175]: E0310 01:06:31.002613 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:31.097139 kubelet[2175]: E0310 01:06:31.096891 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:31.098955 kubelet[2175]: E0310 01:06:31.097370 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:31.164904 zram_generator::config[2511]: No configuration found. Mar 10 01:06:31.344967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:06:31.479215 systemd[1]: Reloading finished in 905 ms. Mar 10 01:06:31.544084 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:31.574639 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:06:31.575330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:06:31.575431 systemd[1]: kubelet.service: Consumed 5.158s CPU time, 128.4M memory peak, 0B memory swap peak. Mar 10 01:06:31.594155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:06:31.814832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:06:31.823876 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:06:31.897449 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:06:31.897449 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 10 01:06:31.897449 kubelet[2556]: I0310 01:06:31.897240 2556 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:06:31.908380 kubelet[2556]: I0310 01:06:31.908309 2556 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 10 01:06:31.908380 kubelet[2556]: I0310 01:06:31.908347 2556 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:06:31.908380 kubelet[2556]: I0310 01:06:31.908377 2556 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:06:31.908380 kubelet[2556]: I0310 01:06:31.908384 2556 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 01:06:31.908585 kubelet[2556]: I0310 01:06:31.908541 2556 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:06:31.910835 kubelet[2556]: I0310 01:06:31.910644 2556 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 01:06:31.915961 kubelet[2556]: I0310 01:06:31.915936 2556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:06:31.921875 kubelet[2556]: E0310 01:06:31.921761 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:06:31.921875 kubelet[2556]: I0310 01:06:31.921820 2556 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 10 01:06:31.930436 sudo[2572]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 10 01:06:31.931856 sudo[2572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 10 01:06:31.935473 kubelet[2556]: I0310 01:06:31.935446 2556 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 10 01:06:31.935847 kubelet[2556]: I0310 01:06:31.935777 2556 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:06:31.936012 kubelet[2556]: I0310 01:06:31.935831 2556 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000
000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:06:31.936012 kubelet[2556]: I0310 01:06:31.936009 2556 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:06:31.936189 kubelet[2556]: I0310 01:06:31.936025 2556 container_manager_linux.go:306] "Creating device plugin manager" Mar 10 01:06:31.936189 kubelet[2556]: I0310 01:06:31.936064 2556 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 01:06:31.936366 kubelet[2556]: I0310 01:06:31.936325 2556 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:06:31.936526 kubelet[2556]: I0310 01:06:31.936489 2556 kubelet.go:475] "Attempting to sync node with API server" Mar 10 01:06:31.936526 kubelet[2556]: I0310 01:06:31.936525 2556 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:06:31.936579 kubelet[2556]: I0310 01:06:31.936549 2556 kubelet.go:387] "Adding apiserver pod source" Mar 10 01:06:31.936579 kubelet[2556]: I0310 01:06:31.936564 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:06:31.941596 kubelet[2556]: I0310 01:06:31.941348 2556 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:06:31.941850 kubelet[2556]: I0310 01:06:31.941812 2556 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:06:31.941903 kubelet[2556]: I0310 01:06:31.941854 2556 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 01:06:31.948638 kubelet[2556]: I0310 01:06:31.948499 2556 server.go:1262] "Started 
kubelet" Mar 10 01:06:31.950327 kubelet[2556]: I0310 01:06:31.950090 2556 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:06:31.950327 kubelet[2556]: I0310 01:06:31.950208 2556 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 01:06:31.950639 kubelet[2556]: I0310 01:06:31.950589 2556 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:06:31.950907 kubelet[2556]: I0310 01:06:31.950782 2556 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:06:31.952251 kubelet[2556]: I0310 01:06:31.952144 2556 server.go:310] "Adding debug handlers to kubelet server" Mar 10 01:06:31.953659 kubelet[2556]: I0310 01:06:31.953593 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 01:06:31.958008 kubelet[2556]: I0310 01:06:31.957876 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:06:31.961370 kubelet[2556]: E0310 01:06:31.960995 2556 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:06:31.962492 kubelet[2556]: I0310 01:06:31.962135 2556 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 10 01:06:31.965360 kubelet[2556]: I0310 01:06:31.965245 2556 reconciler.go:29] "Reconciler: start to sync state" Mar 10 01:06:31.965360 kubelet[2556]: I0310 01:06:31.965329 2556 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 01:06:31.967866 kubelet[2556]: I0310 01:06:31.967545 2556 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:06:31.971172 kubelet[2556]: I0310 01:06:31.970950 2556 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:06:31.981090 kubelet[2556]: I0310 01:06:31.981001 2556 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:06:31.992916 kubelet[2556]: I0310 01:06:31.992796 2556 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 10 01:06:32.019799 kubelet[2556]: I0310 01:06:32.019326 2556 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 10 01:06:32.019799 kubelet[2556]: I0310 01:06:32.019418 2556 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 10 01:06:32.019799 kubelet[2556]: I0310 01:06:32.019535 2556 kubelet.go:2428] "Starting kubelet main sync loop" Mar 10 01:06:32.020077 kubelet[2556]: E0310 01:06:32.019780 2556 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:06:32.103048 kubelet[2556]: I0310 01:06:32.102982 2556 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 01:06:32.103048 kubelet[2556]: I0310 01:06:32.103044 2556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 01:06:32.103225 kubelet[2556]: I0310 01:06:32.103098 2556 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:06:32.103975 kubelet[2556]: I0310 01:06:32.103346 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 10 01:06:32.103975 kubelet[2556]: I0310 01:06:32.103368 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 10 01:06:32.103975 kubelet[2556]: I0310 01:06:32.103391 2556 policy_none.go:49] "None policy: Start" Mar 10 01:06:32.103975 kubelet[2556]: I0310 01:06:32.103404 2556 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 01:06:32.103975 kubelet[2556]: I0310 01:06:32.103420 2556 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 01:06:32.104540 kubelet[2556]: I0310 01:06:32.104460 2556 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 01:06:32.104768 kubelet[2556]: I0310 01:06:32.104624 2556 policy_none.go:47] "Start" Mar 10 01:06:32.120440 kubelet[2556]: E0310 01:06:32.120361 2556 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:06:32.121106 kubelet[2556]: E0310 01:06:32.121035 2556 
manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:06:32.121739 kubelet[2556]: I0310 01:06:32.121511 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 01:06:32.121816 kubelet[2556]: I0310 01:06:32.121613 2556 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:06:32.125801 kubelet[2556]: I0310 01:06:32.125734 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 01:06:32.129726 kubelet[2556]: E0310 01:06:32.126736 2556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 01:06:32.237363 kubelet[2556]: I0310 01:06:32.237229 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:32.248250 kubelet[2556]: I0310 01:06:32.248193 2556 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 10 01:06:32.248250 kubelet[2556]: I0310 01:06:32.248329 2556 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:06:32.324845 kubelet[2556]: I0310 01:06:32.322393 2556 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:32.325936 kubelet[2556]: I0310 01:06:32.325907 2556 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:32.327061 kubelet[2556]: I0310 01:06:32.326956 2556 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.332099 kubelet[2556]: E0310 01:06:32.332029 2556 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:32.332828 kubelet[2556]: E0310 01:06:32.332758 2556 kubelet.go:3222] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:32.334970 kubelet[2556]: E0310 01:06:32.334769 2556 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.367790 kubelet[2556]: I0310 01:06:32.367535 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e40fdf347db04b8c8ebedd4f03a81ce6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40fdf347db04b8c8ebedd4f03a81ce6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:32.367790 kubelet[2556]: I0310 01:06:32.367589 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.367790 kubelet[2556]: I0310 01:06:32.367609 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.367790 kubelet[2556]: I0310 01:06:32.367623 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.367790 kubelet[2556]: I0310 01:06:32.367642 2556 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.368111 kubelet[2556]: I0310 01:06:32.367713 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e40fdf347db04b8c8ebedd4f03a81ce6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40fdf347db04b8c8ebedd4f03a81ce6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:32.368111 kubelet[2556]: I0310 01:06:32.367735 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e40fdf347db04b8c8ebedd4f03a81ce6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e40fdf347db04b8c8ebedd4f03a81ce6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:32.368111 kubelet[2556]: I0310 01:06:32.367749 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:32.368111 kubelet[2556]: I0310 01:06:32.367764 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:32.633540 kubelet[2556]: E0310 
01:06:32.633393 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:32.634214 kubelet[2556]: E0310 01:06:32.633970 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:32.636529 kubelet[2556]: E0310 01:06:32.636463 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:32.637175 sudo[2572]: pam_unix(sudo:session): session closed for user root Mar 10 01:06:32.938655 kubelet[2556]: I0310 01:06:32.938483 2556 apiserver.go:52] "Watching apiserver" Mar 10 01:06:32.965943 kubelet[2556]: I0310 01:06:32.965829 2556 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:06:33.046727 kubelet[2556]: I0310 01:06:33.046617 2556 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:33.048707 kubelet[2556]: E0310 01:06:33.046869 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:33.048707 kubelet[2556]: E0310 01:06:33.047228 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:33.064804 kubelet[2556]: I0310 01:06:33.064635 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.064548043 podStartE2EDuration="3.064548043s" podCreationTimestamp="2026-03-10 01:06:30 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:33.064524974 +0000 UTC m=+1.229132548" watchObservedRunningTime="2026-03-10 01:06:33.064548043 +0000 UTC m=+1.229155566" Mar 10 01:06:33.065060 kubelet[2556]: E0310 01:06:33.064995 2556 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:33.065239 kubelet[2556]: E0310 01:06:33.065163 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:33.100220 kubelet[2556]: I0310 01:06:33.099980 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.099962222 podStartE2EDuration="3.099962222s" podCreationTimestamp="2026-03-10 01:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:33.088611553 +0000 UTC m=+1.253219106" watchObservedRunningTime="2026-03-10 01:06:33.099962222 +0000 UTC m=+1.264569746" Mar 10 01:06:33.115379 kubelet[2556]: I0310 01:06:33.114394 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.114370884 podStartE2EDuration="3.114370884s" podCreationTimestamp="2026-03-10 01:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:33.100231992 +0000 UTC m=+1.264839525" watchObservedRunningTime="2026-03-10 01:06:33.114370884 +0000 UTC m=+1.278978417" Mar 10 01:06:34.049171 kubelet[2556]: E0310 01:06:34.048825 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:34.049171 kubelet[2556]: E0310 01:06:34.048873 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:35.066119 kubelet[2556]: E0310 01:06:35.064362 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:35.672133 update_engine[1461]: I20260310 01:06:35.671813 1461 update_attempter.cc:509] Updating boot flags... Mar 10 01:06:35.672857 sudo[1660]: pam_unix(sudo:session): session closed for user root Mar 10 01:06:35.676038 sshd[1657]: pam_unix(sshd:session): session closed for user core Mar 10 01:06:35.683005 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Mar 10 01:06:35.683509 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:52138.service: Deactivated successfully. Mar 10 01:06:35.687400 systemd[1]: session-9.scope: Deactivated successfully. Mar 10 01:06:35.687844 systemd[1]: session-9.scope: Consumed 9.482s CPU time, 157.9M memory peak, 0B memory swap peak. Mar 10 01:06:35.689466 systemd-logind[1459]: Removed session 9. 
Mar 10 01:06:35.974168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2644) Mar 10 01:06:36.182448 kubelet[2556]: E0310 01:06:36.182345 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:36.261877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2646) Mar 10 01:06:37.340965 kubelet[2556]: I0310 01:06:37.339782 2556 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 01:06:37.342567 kubelet[2556]: I0310 01:06:37.341323 2556 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 01:06:37.342770 containerd[1469]: time="2026-03-10T01:06:37.340808061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 10 01:06:38.098500 kubelet[2556]: E0310 01:06:38.098405 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:38.122379 systemd[1]: Created slice kubepods-besteffort-podb75a7196_8b85_4127_8098_fa03a67b2a9c.slice - libcontainer container kubepods-besteffort-podb75a7196_8b85_4127_8098_fa03a67b2a9c.slice. Mar 10 01:06:38.166372 systemd[1]: Created slice kubepods-burstable-podf095890f_9d2b_4988_99a9_a592f713f464.slice - libcontainer container kubepods-burstable-podf095890f_9d2b_4988_99a9_a592f713f464.slice. 
Mar 10 01:06:38.233815 kubelet[2556]: E0310 01:06:38.233585 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:38.271821 kubelet[2556]: I0310 01:06:38.263618 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b75a7196-8b85-4127-8098-fa03a67b2a9c-xtables-lock\") pod \"kube-proxy-zknk5\" (UID: \"b75a7196-8b85-4127-8098-fa03a67b2a9c\") " pod="kube-system/kube-proxy-zknk5" Mar 10 01:06:38.271821 kubelet[2556]: I0310 01:06:38.271504 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b75a7196-8b85-4127-8098-fa03a67b2a9c-lib-modules\") pod \"kube-proxy-zknk5\" (UID: \"b75a7196-8b85-4127-8098-fa03a67b2a9c\") " pod="kube-system/kube-proxy-zknk5" Mar 10 01:06:38.271821 kubelet[2556]: I0310 01:06:38.271598 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f095890f-9d2b-4988-99a9-a592f713f464-cilium-config-path\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.271821 kubelet[2556]: I0310 01:06:38.271622 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-net\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.271821 kubelet[2556]: I0310 01:06:38.271643 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-hubble-tls\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.271821 kubelet[2556]: I0310 01:06:38.271758 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-etc-cni-netd\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274064 kubelet[2556]: I0310 01:06:38.271781 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f095890f-9d2b-4988-99a9-a592f713f464-clustermesh-secrets\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274064 kubelet[2556]: I0310 01:06:38.271804 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-kernel\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274064 kubelet[2556]: I0310 01:06:38.271827 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-run\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274064 kubelet[2556]: I0310 01:06:38.271847 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-hostproc\") pod \"cilium-gqnbh\" (UID: 
\"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274064 kubelet[2556]: I0310 01:06:38.271867 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cni-path\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274064 kubelet[2556]: I0310 01:06:38.271889 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4knz\" (UniqueName: \"kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-kube-api-access-h4knz\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274991 kubelet[2556]: I0310 01:06:38.271916 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b75a7196-8b85-4127-8098-fa03a67b2a9c-kube-proxy\") pod \"kube-proxy-zknk5\" (UID: \"b75a7196-8b85-4127-8098-fa03a67b2a9c\") " pod="kube-system/kube-proxy-zknk5" Mar 10 01:06:38.274991 kubelet[2556]: I0310 01:06:38.271938 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpwx\" (UniqueName: \"kubernetes.io/projected/b75a7196-8b85-4127-8098-fa03a67b2a9c-kube-api-access-sgpwx\") pod \"kube-proxy-zknk5\" (UID: \"b75a7196-8b85-4127-8098-fa03a67b2a9c\") " pod="kube-system/kube-proxy-zknk5" Mar 10 01:06:38.274991 kubelet[2556]: I0310 01:06:38.271987 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-bpf-maps\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274991 kubelet[2556]: 
I0310 01:06:38.272100 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-cgroup\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274991 kubelet[2556]: I0310 01:06:38.272178 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-lib-modules\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.274991 kubelet[2556]: I0310 01:06:38.272227 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-xtables-lock\") pod \"cilium-gqnbh\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " pod="kube-system/cilium-gqnbh" Mar 10 01:06:38.501794 kubelet[2556]: E0310 01:06:38.497731 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:38.502362 containerd[1469]: time="2026-03-10T01:06:38.500067281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqnbh,Uid:f095890f-9d2b-4988-99a9-a592f713f464,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:38.542994 systemd[1]: Created slice kubepods-besteffort-podd2bea28b_0c8d_4628_b260_d01f23d60891.slice - libcontainer container kubepods-besteffort-podd2bea28b_0c8d_4628_b260_d01f23d60891.slice. Mar 10 01:06:38.673060 containerd[1469]: time="2026-03-10T01:06:38.657935935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:38.673060 containerd[1469]: time="2026-03-10T01:06:38.668818443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:38.673060 containerd[1469]: time="2026-03-10T01:06:38.668836207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:38.673060 containerd[1469]: time="2026-03-10T01:06:38.669753479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:38.681118 kubelet[2556]: I0310 01:06:38.680408 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2bea28b-0c8d-4628-b260-d01f23d60891-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-7ljrl\" (UID: \"d2bea28b-0c8d-4628-b260-d01f23d60891\") " pod="kube-system/cilium-operator-6f9c7c5859-7ljrl" Mar 10 01:06:38.681118 kubelet[2556]: I0310 01:06:38.680738 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drlj9\" (UniqueName: \"kubernetes.io/projected/d2bea28b-0c8d-4628-b260-d01f23d60891-kube-api-access-drlj9\") pod \"cilium-operator-6f9c7c5859-7ljrl\" (UID: \"d2bea28b-0c8d-4628-b260-d01f23d60891\") " pod="kube-system/cilium-operator-6f9c7c5859-7ljrl" Mar 10 01:06:38.873080 kubelet[2556]: E0310 01:06:38.872765 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:38.873637 systemd[1]: Started cri-containerd-25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d.scope - libcontainer container 25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d. 
Mar 10 01:06:38.876914 containerd[1469]: time="2026-03-10T01:06:38.876609863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zknk5,Uid:b75a7196-8b85-4127-8098-fa03a67b2a9c,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:38.991627 containerd[1469]: time="2026-03-10T01:06:38.991426419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqnbh,Uid:f095890f-9d2b-4988-99a9-a592f713f464,Namespace:kube-system,Attempt:0,} returns sandbox id \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\"" Mar 10 01:06:38.995399 kubelet[2556]: E0310 01:06:38.993983 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:38.998491 containerd[1469]: time="2026-03-10T01:06:38.998439259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 10 01:06:39.010282 containerd[1469]: time="2026-03-10T01:06:39.009564073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:39.010282 containerd[1469]: time="2026-03-10T01:06:39.009645043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:39.010282 containerd[1469]: time="2026-03-10T01:06:39.009738086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:39.010282 containerd[1469]: time="2026-03-10T01:06:39.009877235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:39.062952 systemd[1]: Started cri-containerd-3996989f8938377ef17cbc38b1c7a5d41effd3a41e26a150a8de68f9f4ccdda5.scope - libcontainer container 3996989f8938377ef17cbc38b1c7a5d41effd3a41e26a150a8de68f9f4ccdda5. Mar 10 01:06:39.191407 kubelet[2556]: E0310 01:06:39.189973 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:39.195976 containerd[1469]: time="2026-03-10T01:06:39.193527580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-7ljrl,Uid:d2bea28b-0c8d-4628-b260-d01f23d60891,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:39.198653 containerd[1469]: time="2026-03-10T01:06:39.198603590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zknk5,Uid:b75a7196-8b85-4127-8098-fa03a67b2a9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3996989f8938377ef17cbc38b1c7a5d41effd3a41e26a150a8de68f9f4ccdda5\"" Mar 10 01:06:39.200434 kubelet[2556]: E0310 01:06:39.200359 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:39.210878 containerd[1469]: time="2026-03-10T01:06:39.210793052Z" level=info msg="CreateContainer within sandbox \"3996989f8938377ef17cbc38b1c7a5d41effd3a41e26a150a8de68f9f4ccdda5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 10 01:06:39.262843 containerd[1469]: time="2026-03-10T01:06:39.262655535Z" level=info msg="CreateContainer within sandbox \"3996989f8938377ef17cbc38b1c7a5d41effd3a41e26a150a8de68f9f4ccdda5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64d839a852c7d8c16ea27b1956c8a2d6646dddee5bbc753e0960b194d5270916\"" Mar 10 01:06:39.263645 containerd[1469]: time="2026-03-10T01:06:39.263586012Z" 
level=info msg="StartContainer for \"64d839a852c7d8c16ea27b1956c8a2d6646dddee5bbc753e0960b194d5270916\"" Mar 10 01:06:39.315145 containerd[1469]: time="2026-03-10T01:06:39.309831264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:39.315145 containerd[1469]: time="2026-03-10T01:06:39.310434122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:39.315145 containerd[1469]: time="2026-03-10T01:06:39.310464519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:39.315145 containerd[1469]: time="2026-03-10T01:06:39.311422938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:39.417970 systemd[1]: Started cri-containerd-64d839a852c7d8c16ea27b1956c8a2d6646dddee5bbc753e0960b194d5270916.scope - libcontainer container 64d839a852c7d8c16ea27b1956c8a2d6646dddee5bbc753e0960b194d5270916. Mar 10 01:06:39.441000 systemd[1]: Started cri-containerd-9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca.scope - libcontainer container 9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca. 
Mar 10 01:06:39.508301 containerd[1469]: time="2026-03-10T01:06:39.508038019Z" level=info msg="StartContainer for \"64d839a852c7d8c16ea27b1956c8a2d6646dddee5bbc753e0960b194d5270916\" returns successfully" Mar 10 01:06:39.746819 containerd[1469]: time="2026-03-10T01:06:39.746330797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-7ljrl,Uid:d2bea28b-0c8d-4628-b260-d01f23d60891,Namespace:kube-system,Attempt:0,} returns sandbox id \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\"" Mar 10 01:06:39.754731 kubelet[2556]: E0310 01:06:39.752117 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:40.268820 kubelet[2556]: E0310 01:06:40.268547 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:41.289165 kubelet[2556]: E0310 01:06:41.288948 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:42.075453 kubelet[2556]: I0310 01:06:42.075041 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zknk5" podStartSLOduration=5.075017696 podStartE2EDuration="5.075017696s" podCreationTimestamp="2026-03-10 01:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:40.302964599 +0000 UTC m=+8.467572123" watchObservedRunningTime="2026-03-10 01:06:42.075017696 +0000 UTC m=+10.239625370" Mar 10 01:06:43.565132 kubelet[2556]: E0310 01:06:43.564605 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:47.846992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681907555.mount: Deactivated successfully. Mar 10 01:06:53.407869 containerd[1469]: time="2026-03-10T01:06:53.407511739Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:53.408894 containerd[1469]: time="2026-03-10T01:06:53.408622389Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 10 01:06:53.410251 containerd[1469]: time="2026-03-10T01:06:53.410160530Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:53.412838 containerd[1469]: time="2026-03-10T01:06:53.412772650Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.414276695s" Mar 10 01:06:53.412935 containerd[1469]: time="2026-03-10T01:06:53.412842411Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 10 01:06:53.414601 containerd[1469]: time="2026-03-10T01:06:53.414560047Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 10 01:06:53.425393 containerd[1469]: 
time="2026-03-10T01:06:53.425134510Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:06:53.447181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500275552.mount: Deactivated successfully. Mar 10 01:06:53.452399 containerd[1469]: time="2026-03-10T01:06:53.452315496Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\"" Mar 10 01:06:53.453360 containerd[1469]: time="2026-03-10T01:06:53.453165355Z" level=info msg="StartContainer for \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\"" Mar 10 01:06:53.575971 systemd[1]: Started cri-containerd-aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34.scope - libcontainer container aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34. Mar 10 01:06:53.646116 containerd[1469]: time="2026-03-10T01:06:53.645999326Z" level=info msg="StartContainer for \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\" returns successfully" Mar 10 01:06:53.662109 systemd[1]: cri-containerd-aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34.scope: Deactivated successfully. 
Mar 10 01:06:53.794234 containerd[1469]: time="2026-03-10T01:06:53.794004516Z" level=info msg="shim disconnected" id=aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34 namespace=k8s.io Mar 10 01:06:53.794234 containerd[1469]: time="2026-03-10T01:06:53.794158554Z" level=warning msg="cleaning up after shim disconnected" id=aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34 namespace=k8s.io Mar 10 01:06:53.794234 containerd[1469]: time="2026-03-10T01:06:53.794177969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:06:54.574826 systemd[1]: run-containerd-runc-k8s.io-aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34-runc.baNIQE.mount: Deactivated successfully. Mar 10 01:06:54.576632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34-rootfs.mount: Deactivated successfully. Mar 10 01:06:54.636094 kubelet[2556]: E0310 01:06:54.635583 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:54.645050 containerd[1469]: time="2026-03-10T01:06:54.645013930Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:06:54.678507 containerd[1469]: time="2026-03-10T01:06:54.678380508Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\"" Mar 10 01:06:54.684498 containerd[1469]: time="2026-03-10T01:06:54.683499818Z" level=info msg="StartContainer for \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\"" Mar 10 01:06:54.810116 systemd[1]: 
Started cri-containerd-2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a.scope - libcontainer container 2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a. Mar 10 01:06:54.904069 containerd[1469]: time="2026-03-10T01:06:54.903540506Z" level=info msg="StartContainer for \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\" returns successfully" Mar 10 01:06:54.915869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 01:06:54.916242 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:06:54.916417 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:06:54.930194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:06:54.931430 systemd[1]: cri-containerd-2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a.scope: Deactivated successfully. Mar 10 01:06:55.047984 containerd[1469]: time="2026-03-10T01:06:55.037103595Z" level=info msg="shim disconnected" id=2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a namespace=k8s.io Mar 10 01:06:55.050021 containerd[1469]: time="2026-03-10T01:06:55.049903277Z" level=warning msg="cleaning up after shim disconnected" id=2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a namespace=k8s.io Mar 10 01:06:55.050021 containerd[1469]: time="2026-03-10T01:06:55.049973719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:06:55.064365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 10 01:06:55.079831 containerd[1469]: time="2026-03-10T01:06:55.078079032Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:06:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:06:55.511054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a-rootfs.mount: Deactivated successfully. Mar 10 01:06:55.641953 kubelet[2556]: E0310 01:06:55.641782 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:55.651929 containerd[1469]: time="2026-03-10T01:06:55.651849528Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 10 01:06:55.680568 containerd[1469]: time="2026-03-10T01:06:55.680489756Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\"" Mar 10 01:06:55.681765 containerd[1469]: time="2026-03-10T01:06:55.681622766Z" level=info msg="StartContainer for \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\"" Mar 10 01:06:55.742062 systemd[1]: Started cri-containerd-b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4.scope - libcontainer container b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4. Mar 10 01:06:55.790939 systemd[1]: cri-containerd-b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4.scope: Deactivated successfully. 
Mar 10 01:06:55.841135 containerd[1469]: time="2026-03-10T01:06:55.840973772Z" level=info msg="StartContainer for \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\" returns successfully" Mar 10 01:06:55.889212 containerd[1469]: time="2026-03-10T01:06:55.888876135Z" level=info msg="shim disconnected" id=b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4 namespace=k8s.io Mar 10 01:06:55.889212 containerd[1469]: time="2026-03-10T01:06:55.888958498Z" level=warning msg="cleaning up after shim disconnected" id=b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4 namespace=k8s.io Mar 10 01:06:55.889212 containerd[1469]: time="2026-03-10T01:06:55.888974478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:06:56.312180 containerd[1469]: time="2026-03-10T01:06:56.312006242Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:56.313162 containerd[1469]: time="2026-03-10T01:06:56.313059528Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 10 01:06:56.314563 containerd[1469]: time="2026-03-10T01:06:56.314474544Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:06:56.316018 containerd[1469]: time="2026-03-10T01:06:56.315906531Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.901312971s" Mar 10 01:06:56.316018 containerd[1469]: time="2026-03-10T01:06:56.315963557Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 10 01:06:56.327820 containerd[1469]: time="2026-03-10T01:06:56.327748082Z" level=info msg="CreateContainer within sandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 10 01:06:56.347244 containerd[1469]: time="2026-03-10T01:06:56.347125152Z" level=info msg="CreateContainer within sandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\"" Mar 10 01:06:56.348792 containerd[1469]: time="2026-03-10T01:06:56.347912569Z" level=info msg="StartContainer for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\"" Mar 10 01:06:56.397486 systemd[1]: Started cri-containerd-1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607.scope - libcontainer container 1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607. Mar 10 01:06:56.439819 containerd[1469]: time="2026-03-10T01:06:56.439530749Z" level=info msg="StartContainer for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" returns successfully" Mar 10 01:06:56.513455 systemd[1]: run-containerd-runc-k8s.io-b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4-runc.EFb23W.mount: Deactivated successfully. 
Mar 10 01:06:56.513986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4-rootfs.mount: Deactivated successfully. Mar 10 01:06:56.659531 kubelet[2556]: E0310 01:06:56.659493 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:56.672776 containerd[1469]: time="2026-03-10T01:06:56.671060258Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 10 01:06:56.673491 kubelet[2556]: E0310 01:06:56.671164 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:56.733149 containerd[1469]: time="2026-03-10T01:06:56.732835712Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\"" Mar 10 01:06:56.734279 containerd[1469]: time="2026-03-10T01:06:56.734203029Z" level=info msg="StartContainer for \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\"" Mar 10 01:06:56.847178 systemd[1]: Started cri-containerd-2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f.scope - libcontainer container 2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f. Mar 10 01:06:56.936958 systemd[1]: cri-containerd-2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f.scope: Deactivated successfully. 
Mar 10 01:06:56.942053 containerd[1469]: time="2026-03-10T01:06:56.941966129Z" level=info msg="StartContainer for \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\" returns successfully" Mar 10 01:06:57.096742 containerd[1469]: time="2026-03-10T01:06:57.095490268Z" level=info msg="shim disconnected" id=2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f namespace=k8s.io Mar 10 01:06:57.096742 containerd[1469]: time="2026-03-10T01:06:57.096366551Z" level=warning msg="cleaning up after shim disconnected" id=2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f namespace=k8s.io Mar 10 01:06:57.096742 containerd[1469]: time="2026-03-10T01:06:57.096383873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:06:57.511656 systemd[1]: run-containerd-runc-k8s.io-2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f-runc.QbZK4j.mount: Deactivated successfully. Mar 10 01:06:57.511993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f-rootfs.mount: Deactivated successfully. 
Mar 10 01:06:57.680218 kubelet[2556]: E0310 01:06:57.679652 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:57.680218 kubelet[2556]: E0310 01:06:57.679876 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:57.689759 containerd[1469]: time="2026-03-10T01:06:57.689634539Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 10 01:06:57.708611 kubelet[2556]: I0310 01:06:57.708236 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-7ljrl" podStartSLOduration=3.144194736 podStartE2EDuration="19.708212344s" podCreationTimestamp="2026-03-10 01:06:38 +0000 UTC" firstStartedPulling="2026-03-10 01:06:39.753120598 +0000 UTC m=+7.917728121" lastFinishedPulling="2026-03-10 01:06:56.317138206 +0000 UTC m=+24.481745729" observedRunningTime="2026-03-10 01:06:56.80391506 +0000 UTC m=+24.968522584" watchObservedRunningTime="2026-03-10 01:06:57.708212344 +0000 UTC m=+25.872819877" Mar 10 01:06:57.732182 containerd[1469]: time="2026-03-10T01:06:57.732105287Z" level=info msg="CreateContainer within sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\"" Mar 10 01:06:57.733986 containerd[1469]: time="2026-03-10T01:06:57.733913120Z" level=info msg="StartContainer for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\"" Mar 10 01:06:57.812060 systemd[1]: Started 
cri-containerd-f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e.scope - libcontainer container f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e. Mar 10 01:06:57.872103 containerd[1469]: time="2026-03-10T01:06:57.871933972Z" level=info msg="StartContainer for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" returns successfully" Mar 10 01:06:58.210802 kubelet[2556]: I0310 01:06:58.210656 2556 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 10 01:06:58.296882 systemd[1]: Created slice kubepods-burstable-podb0d04c06_9b13_43fa_b45f_1a6c75f0954e.slice - libcontainer container kubepods-burstable-podb0d04c06_9b13_43fa_b45f_1a6c75f0954e.slice. Mar 10 01:06:58.306414 systemd[1]: Created slice kubepods-burstable-pod72299766_4eb7_4fec_824f_b534c7b4b268.slice - libcontainer container kubepods-burstable-pod72299766_4eb7_4fec_824f_b534c7b4b268.slice. Mar 10 01:06:58.339050 kubelet[2556]: I0310 01:06:58.338990 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d04c06-9b13-43fa-b45f-1a6c75f0954e-config-volume\") pod \"coredns-66bc5c9577-5c9dz\" (UID: \"b0d04c06-9b13-43fa-b45f-1a6c75f0954e\") " pod="kube-system/coredns-66bc5c9577-5c9dz" Mar 10 01:06:58.339050 kubelet[2556]: I0310 01:06:58.339034 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxvcb\" (UniqueName: \"kubernetes.io/projected/72299766-4eb7-4fec-824f-b534c7b4b268-kube-api-access-dxvcb\") pod \"coredns-66bc5c9577-9tcqh\" (UID: \"72299766-4eb7-4fec-824f-b534c7b4b268\") " pod="kube-system/coredns-66bc5c9577-9tcqh" Mar 10 01:06:58.339050 kubelet[2556]: I0310 01:06:58.339056 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclwk\" (UniqueName: 
\"kubernetes.io/projected/b0d04c06-9b13-43fa-b45f-1a6c75f0954e-kube-api-access-fclwk\") pod \"coredns-66bc5c9577-5c9dz\" (UID: \"b0d04c06-9b13-43fa-b45f-1a6c75f0954e\") " pod="kube-system/coredns-66bc5c9577-5c9dz" Mar 10 01:06:58.339470 kubelet[2556]: I0310 01:06:58.339072 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72299766-4eb7-4fec-824f-b534c7b4b268-config-volume\") pod \"coredns-66bc5c9577-9tcqh\" (UID: \"72299766-4eb7-4fec-824f-b534c7b4b268\") " pod="kube-system/coredns-66bc5c9577-9tcqh" Mar 10 01:06:58.514430 systemd[1]: run-containerd-runc-k8s.io-f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e-runc.WS7CA1.mount: Deactivated successfully. Mar 10 01:06:58.616389 kubelet[2556]: E0310 01:06:58.615985 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:58.621727 kubelet[2556]: E0310 01:06:58.618618 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:58.660444 containerd[1469]: time="2026-03-10T01:06:58.659998609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9tcqh,Uid:72299766-4eb7-4fec-824f-b534c7b4b268,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:58.675158 containerd[1469]: time="2026-03-10T01:06:58.674452591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5c9dz,Uid:b0d04c06-9b13-43fa-b45f-1a6c75f0954e,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:58.690170 kubelet[2556]: E0310 01:06:58.689843 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:58.731364 
kubelet[2556]: I0310 01:06:58.731117 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gqnbh" podStartSLOduration=7.314173261 podStartE2EDuration="21.731023036s" podCreationTimestamp="2026-03-10 01:06:37 +0000 UTC" firstStartedPulling="2026-03-10 01:06:38.997358148 +0000 UTC m=+7.161965681" lastFinishedPulling="2026-03-10 01:06:53.414207933 +0000 UTC m=+21.578815456" observedRunningTime="2026-03-10 01:06:58.717885947 +0000 UTC m=+26.882493499" watchObservedRunningTime="2026-03-10 01:06:58.731023036 +0000 UTC m=+26.895630599" Mar 10 01:06:59.700004 kubelet[2556]: E0310 01:06:59.699648 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:00.709771 kubelet[2556]: E0310 01:07:00.707776 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:01.161483 systemd-networkd[1394]: cilium_host: Link UP Mar 10 01:07:01.161753 systemd-networkd[1394]: cilium_net: Link UP Mar 10 01:07:01.161759 systemd-networkd[1394]: cilium_net: Gained carrier Mar 10 01:07:01.163824 systemd-networkd[1394]: cilium_host: Gained carrier Mar 10 01:07:01.320642 systemd-networkd[1394]: cilium_vxlan: Link UP Mar 10 01:07:01.320759 systemd-networkd[1394]: cilium_vxlan: Gained carrier Mar 10 01:07:01.631881 kernel: NET: Registered PF_ALG protocol family Mar 10 01:07:01.662213 systemd-networkd[1394]: cilium_net: Gained IPv6LL Mar 10 01:07:01.720067 systemd-networkd[1394]: cilium_host: Gained IPv6LL Mar 10 01:07:02.665973 systemd-networkd[1394]: lxc_health: Link UP Mar 10 01:07:02.674235 systemd-networkd[1394]: lxc_health: Gained carrier Mar 10 01:07:02.678914 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL Mar 10 01:07:02.946574 systemd-networkd[1394]: lxc2497bcf7fedd: Link UP Mar 10 
01:07:02.966048 systemd-networkd[1394]: lxc68b108c3fbc2: Link UP Mar 10 01:07:02.969771 kernel: eth0: renamed from tmpac401 Mar 10 01:07:02.979888 kernel: eth0: renamed from tmpd15c2 Mar 10 01:07:02.988630 systemd-networkd[1394]: lxc2497bcf7fedd: Gained carrier Mar 10 01:07:02.991979 systemd-networkd[1394]: lxc68b108c3fbc2: Gained carrier Mar 10 01:07:04.023818 systemd-networkd[1394]: lxc68b108c3fbc2: Gained IPv6LL Mar 10 01:07:04.151404 systemd-networkd[1394]: lxc2497bcf7fedd: Gained IPv6LL Mar 10 01:07:04.551280 kubelet[2556]: E0310 01:07:04.550910 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:04.598592 systemd-networkd[1394]: lxc_health: Gained IPv6LL Mar 10 01:07:04.737295 kubelet[2556]: E0310 01:07:04.737153 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:06.155652 kubelet[2556]: E0310 01:07:06.155511 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:09.471912 containerd[1469]: time="2026-03-10T01:07:09.470636773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:07:09.472777 containerd[1469]: time="2026-03-10T01:07:09.472146198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:07:09.472777 containerd[1469]: time="2026-03-10T01:07:09.472258548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:07:09.472777 containerd[1469]: time="2026-03-10T01:07:09.472588563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:07:09.512976 systemd[1]: Started cri-containerd-ac4011fbf0ba3db0292175003a416fe2c584533f0a6318479d97933b2578eded.scope - libcontainer container ac4011fbf0ba3db0292175003a416fe2c584533f0a6318479d97933b2578eded. Mar 10 01:07:09.549565 containerd[1469]: time="2026-03-10T01:07:09.549338046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:07:09.549565 containerd[1469]: time="2026-03-10T01:07:09.549492495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:07:09.549565 containerd[1469]: time="2026-03-10T01:07:09.549515648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:07:09.549944 containerd[1469]: time="2026-03-10T01:07:09.549751769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:07:09.558458 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:07:09.598504 systemd[1]: Started cri-containerd-d15c2043149ece431f9da13b28aed492b4e037eeac3062b15e5fd1dea838544c.scope - libcontainer container d15c2043149ece431f9da13b28aed492b4e037eeac3062b15e5fd1dea838544c. 
Mar 10 01:07:09.634236 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:07:09.640103 containerd[1469]: time="2026-03-10T01:07:09.639913917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9tcqh,Uid:72299766-4eb7-4fec-824f-b534c7b4b268,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac4011fbf0ba3db0292175003a416fe2c584533f0a6318479d97933b2578eded\"" Mar 10 01:07:09.643312 kubelet[2556]: E0310 01:07:09.643236 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:09.654004 containerd[1469]: time="2026-03-10T01:07:09.653466803Z" level=info msg="CreateContainer within sandbox \"ac4011fbf0ba3db0292175003a416fe2c584533f0a6318479d97933b2578eded\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:07:09.680529 containerd[1469]: time="2026-03-10T01:07:09.680337804Z" level=info msg="CreateContainer within sandbox \"ac4011fbf0ba3db0292175003a416fe2c584533f0a6318479d97933b2578eded\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"077da8cf216568cf9824dc7878adbabb894859bd7360a9864f7b4266d1495c57\"" Mar 10 01:07:09.684491 containerd[1469]: time="2026-03-10T01:07:09.684362302Z" level=info msg="StartContainer for \"077da8cf216568cf9824dc7878adbabb894859bd7360a9864f7b4266d1495c57\"" Mar 10 01:07:09.689020 containerd[1469]: time="2026-03-10T01:07:09.688890795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5c9dz,Uid:b0d04c06-9b13-43fa-b45f-1a6c75f0954e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d15c2043149ece431f9da13b28aed492b4e037eeac3062b15e5fd1dea838544c\"" Mar 10 01:07:09.690799 kubelet[2556]: E0310 01:07:09.690015 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:09.698926 containerd[1469]: time="2026-03-10T01:07:09.698583141Z" level=info msg="CreateContainer within sandbox \"d15c2043149ece431f9da13b28aed492b4e037eeac3062b15e5fd1dea838544c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:07:09.745504 containerd[1469]: time="2026-03-10T01:07:09.745263152Z" level=info msg="CreateContainer within sandbox \"d15c2043149ece431f9da13b28aed492b4e037eeac3062b15e5fd1dea838544c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b73fbd6a96ae2fa3cf3095a3b4c617d9b64acda7afbecfde5a8d4a076b79ab58\"" Mar 10 01:07:09.749217 containerd[1469]: time="2026-03-10T01:07:09.747965013Z" level=info msg="StartContainer for \"b73fbd6a96ae2fa3cf3095a3b4c617d9b64acda7afbecfde5a8d4a076b79ab58\"" Mar 10 01:07:09.770021 systemd[1]: Started cri-containerd-077da8cf216568cf9824dc7878adbabb894859bd7360a9864f7b4266d1495c57.scope - libcontainer container 077da8cf216568cf9824dc7878adbabb894859bd7360a9864f7b4266d1495c57. Mar 10 01:07:09.811377 systemd[1]: Started cri-containerd-b73fbd6a96ae2fa3cf3095a3b4c617d9b64acda7afbecfde5a8d4a076b79ab58.scope - libcontainer container b73fbd6a96ae2fa3cf3095a3b4c617d9b64acda7afbecfde5a8d4a076b79ab58. 
Mar 10 01:07:09.846872 containerd[1469]: time="2026-03-10T01:07:09.846620272Z" level=info msg="StartContainer for \"077da8cf216568cf9824dc7878adbabb894859bd7360a9864f7b4266d1495c57\" returns successfully" Mar 10 01:07:09.876512 containerd[1469]: time="2026-03-10T01:07:09.876360892Z" level=info msg="StartContainer for \"b73fbd6a96ae2fa3cf3095a3b4c617d9b64acda7afbecfde5a8d4a076b79ab58\" returns successfully" Mar 10 01:07:10.184278 kubelet[2556]: E0310 01:07:10.184104 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:10.187977 kubelet[2556]: E0310 01:07:10.187854 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:10.216889 kubelet[2556]: I0310 01:07:10.216258 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9tcqh" podStartSLOduration=32.216216313 podStartE2EDuration="32.216216313s" podCreationTimestamp="2026-03-10 01:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:07:10.214130713 +0000 UTC m=+38.378738276" watchObservedRunningTime="2026-03-10 01:07:10.216216313 +0000 UTC m=+38.380823866" Mar 10 01:07:11.196037 kubelet[2556]: E0310 01:07:11.195782 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:11.196037 kubelet[2556]: E0310 01:07:11.195782 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:11.214239 kubelet[2556]: I0310 01:07:11.214117 2556 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5c9dz" podStartSLOduration=33.214092184 podStartE2EDuration="33.214092184s" podCreationTimestamp="2026-03-10 01:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:07:10.248757235 +0000 UTC m=+38.413364778" watchObservedRunningTime="2026-03-10 01:07:11.214092184 +0000 UTC m=+39.378699717" Mar 10 01:07:12.198905 kubelet[2556]: E0310 01:07:12.198777 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:12.198905 kubelet[2556]: E0310 01:07:12.198889 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:13.207594 kubelet[2556]: E0310 01:07:13.207341 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:25.930169 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:56902.service - OpenSSH per-connection server daemon (10.0.0.1:56902). Mar 10 01:07:25.987895 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 56902 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:25.990023 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:25.995881 systemd-logind[1459]: New session 10 of user core. Mar 10 01:07:26.006871 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 01:07:26.203900 sshd[3973]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:26.209125 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:56902.service: Deactivated successfully. 
Mar 10 01:07:26.212508 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 01:07:26.214850 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Mar 10 01:07:26.216424 systemd-logind[1459]: Removed session 10. Mar 10 01:07:31.221978 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:37274.service - OpenSSH per-connection server daemon (10.0.0.1:37274). Mar 10 01:07:31.260328 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 37274 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:31.262205 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:31.268092 systemd-logind[1459]: New session 11 of user core. Mar 10 01:07:31.275921 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 01:07:31.395039 sshd[3996]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:31.400282 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:37274.service: Deactivated successfully. Mar 10 01:07:31.403464 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 01:07:31.404610 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Mar 10 01:07:31.406423 systemd-logind[1459]: Removed session 11. Mar 10 01:07:36.406331 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:37288.service - OpenSSH per-connection server daemon (10.0.0.1:37288). Mar 10 01:07:36.446103 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 37288 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:36.447978 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:36.452988 systemd-logind[1459]: New session 12 of user core. Mar 10 01:07:36.459833 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 10 01:07:36.586358 sshd[4013]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:36.591013 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:37288.service: Deactivated successfully. Mar 10 01:07:36.594435 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 01:07:36.595972 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Mar 10 01:07:36.597244 systemd-logind[1459]: Removed session 12. Mar 10 01:07:41.602733 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:33712.service - OpenSSH per-connection server daemon (10.0.0.1:33712). Mar 10 01:07:41.658637 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 33712 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:41.660479 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:41.666163 systemd-logind[1459]: New session 13 of user core. Mar 10 01:07:41.675843 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 10 01:07:41.789266 sshd[4030]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:41.800615 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:33712.service: Deactivated successfully. Mar 10 01:07:41.802524 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 01:07:41.804239 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Mar 10 01:07:41.809013 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:33720.service - OpenSSH per-connection server daemon (10.0.0.1:33720). Mar 10 01:07:41.810230 systemd-logind[1459]: Removed session 13. Mar 10 01:07:41.846915 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 33720 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:41.848625 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:41.854178 systemd-logind[1459]: New session 14 of user core. 
Mar 10 01:07:41.863825 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 10 01:07:42.022108 kubelet[2556]: E0310 01:07:42.022036 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:42.029545 sshd[4046]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:42.040236 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:33720.service: Deactivated successfully. Mar 10 01:07:42.042430 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 01:07:42.044782 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Mar 10 01:07:42.056731 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:33734.service - OpenSSH per-connection server daemon (10.0.0.1:33734). Mar 10 01:07:42.061114 systemd-logind[1459]: Removed session 14. Mar 10 01:07:42.094166 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 33734 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:42.096285 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:42.101962 systemd-logind[1459]: New session 15 of user core. Mar 10 01:07:42.112924 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 10 01:07:42.238747 sshd[4058]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:42.243790 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:33734.service: Deactivated successfully. Mar 10 01:07:42.246092 systemd[1]: session-15.scope: Deactivated successfully. Mar 10 01:07:42.247196 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Mar 10 01:07:42.248562 systemd-logind[1459]: Removed session 15. 
Mar 10 01:07:47.020821 kubelet[2556]: E0310 01:07:47.020658 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:47.251655 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736). Mar 10 01:07:47.301784 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:47.304396 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:47.310457 systemd-logind[1459]: New session 16 of user core. Mar 10 01:07:47.316855 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 10 01:07:47.438889 sshd[4076]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:47.443654 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:33736.service: Deactivated successfully. Mar 10 01:07:47.445934 systemd[1]: session-16.scope: Deactivated successfully. Mar 10 01:07:47.446879 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Mar 10 01:07:47.448193 systemd-logind[1459]: Removed session 16. Mar 10 01:07:49.020507 kubelet[2556]: E0310 01:07:49.020424 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:52.460105 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:42266.service - OpenSSH per-connection server daemon (10.0.0.1:42266). Mar 10 01:07:52.499086 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 42266 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:52.501336 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:52.508477 systemd-logind[1459]: New session 17 of user core. 
Mar 10 01:07:52.513976 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 10 01:07:52.645802 sshd[4090]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:52.654858 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:42266.service: Deactivated successfully. Mar 10 01:07:52.657829 systemd[1]: session-17.scope: Deactivated successfully. Mar 10 01:07:52.660339 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Mar 10 01:07:52.668332 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268). Mar 10 01:07:52.669875 systemd-logind[1459]: Removed session 17. Mar 10 01:07:52.707102 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:52.709645 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:52.716352 systemd-logind[1459]: New session 18 of user core. Mar 10 01:07:52.730090 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 10 01:07:53.073994 sshd[4105]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:53.087254 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:42268.service: Deactivated successfully. Mar 10 01:07:53.090512 systemd[1]: session-18.scope: Deactivated successfully. Mar 10 01:07:53.093922 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Mar 10 01:07:53.105183 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:42270.service - OpenSSH per-connection server daemon (10.0.0.1:42270). Mar 10 01:07:53.107421 systemd-logind[1459]: Removed session 18. 
Mar 10 01:07:53.149211 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 42270 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:53.151277 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:53.157546 systemd-logind[1459]: New session 19 of user core. Mar 10 01:07:53.165883 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 10 01:07:53.990011 sshd[4117]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:53.999456 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:42270.service: Deactivated successfully. Mar 10 01:07:54.002583 systemd[1]: session-19.scope: Deactivated successfully. Mar 10 01:07:54.005428 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Mar 10 01:07:54.017249 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:42274.service - OpenSSH per-connection server daemon (10.0.0.1:42274). Mar 10 01:07:54.025657 systemd-logind[1459]: Removed session 19. Mar 10 01:07:54.070587 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 42274 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:54.072740 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:54.078952 systemd-logind[1459]: New session 20 of user core. Mar 10 01:07:54.089928 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 10 01:07:54.367138 sshd[4135]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:54.384949 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:42274.service: Deactivated successfully. Mar 10 01:07:54.388323 systemd[1]: session-20.scope: Deactivated successfully. Mar 10 01:07:54.390899 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Mar 10 01:07:54.415251 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:42282.service - OpenSSH per-connection server daemon (10.0.0.1:42282). 
Mar 10 01:07:54.417153 systemd-logind[1459]: Removed session 20. Mar 10 01:07:54.454382 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 42282 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:54.456562 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:54.463505 systemd-logind[1459]: New session 21 of user core. Mar 10 01:07:54.473937 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 10 01:07:54.605478 sshd[4147]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:54.611362 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:42282.service: Deactivated successfully. Mar 10 01:07:54.614033 systemd[1]: session-21.scope: Deactivated successfully. Mar 10 01:07:54.615110 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Mar 10 01:07:54.616722 systemd-logind[1459]: Removed session 21. Mar 10 01:07:55.021981 kubelet[2556]: E0310 01:07:55.021326 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:58.020767 kubelet[2556]: E0310 01:07:58.020609 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:07:59.619101 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:48872.service - OpenSSH per-connection server daemon (10.0.0.1:48872). Mar 10 01:07:59.674983 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 48872 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:07:59.676905 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:07:59.682577 systemd-logind[1459]: New session 22 of user core. Mar 10 01:07:59.687966 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 10 01:07:59.806640 sshd[4164]: pam_unix(sshd:session): session closed for user core Mar 10 01:07:59.812075 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:48872.service: Deactivated successfully. Mar 10 01:07:59.814172 systemd[1]: session-22.scope: Deactivated successfully. Mar 10 01:07:59.815381 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Mar 10 01:07:59.817347 systemd-logind[1459]: Removed session 22. Mar 10 01:08:04.818822 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:48878.service - OpenSSH per-connection server daemon (10.0.0.1:48878). Mar 10 01:08:04.859219 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 48878 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:08:04.861167 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:08:04.867454 systemd-logind[1459]: New session 23 of user core. Mar 10 01:08:04.880924 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 10 01:08:05.002194 sshd[4180]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:05.006800 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:48878.service: Deactivated successfully. Mar 10 01:08:05.008962 systemd[1]: session-23.scope: Deactivated successfully. Mar 10 01:08:05.010147 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Mar 10 01:08:05.012368 systemd-logind[1459]: Removed session 23. Mar 10 01:08:10.024375 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:34678.service - OpenSSH per-connection server daemon (10.0.0.1:34678). Mar 10 01:08:10.067177 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 34678 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:08:10.070027 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:08:10.077937 systemd-logind[1459]: New session 24 of user core. 
Mar 10 01:08:10.092018 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 10 01:08:10.237115 sshd[4194]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:10.251959 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:34678.service: Deactivated successfully. Mar 10 01:08:10.255018 systemd[1]: session-24.scope: Deactivated successfully. Mar 10 01:08:10.258967 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Mar 10 01:08:10.266157 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). Mar 10 01:08:10.267797 systemd-logind[1459]: Removed session 24. Mar 10 01:08:10.303446 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:08:10.305958 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:08:10.312603 systemd-logind[1459]: New session 25 of user core. Mar 10 01:08:10.321036 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 10 01:08:11.710761 containerd[1469]: time="2026-03-10T01:08:11.710458202Z" level=info msg="StopContainer for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" with timeout 30 (s)" Mar 10 01:08:11.712093 containerd[1469]: time="2026-03-10T01:08:11.712037902Z" level=info msg="Stop container \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" with signal terminated" Mar 10 01:08:11.747840 systemd[1]: cri-containerd-1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607.scope: Deactivated successfully. 
Mar 10 01:08:11.758190 containerd[1469]: time="2026-03-10T01:08:11.758042658Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 01:08:11.769111 containerd[1469]: time="2026-03-10T01:08:11.769046651Z" level=info msg="StopContainer for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" with timeout 2 (s)" Mar 10 01:08:11.769608 containerd[1469]: time="2026-03-10T01:08:11.769531462Z" level=info msg="Stop container \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" with signal terminated" Mar 10 01:08:11.781343 systemd-networkd[1394]: lxc_health: Link DOWN Mar 10 01:08:11.781355 systemd-networkd[1394]: lxc_health: Lost carrier Mar 10 01:08:11.786368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607-rootfs.mount: Deactivated successfully. Mar 10 01:08:11.806767 containerd[1469]: time="2026-03-10T01:08:11.806582841Z" level=info msg="shim disconnected" id=1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607 namespace=k8s.io Mar 10 01:08:11.806767 containerd[1469]: time="2026-03-10T01:08:11.806765522Z" level=warning msg="cleaning up after shim disconnected" id=1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607 namespace=k8s.io Mar 10 01:08:11.807099 containerd[1469]: time="2026-03-10T01:08:11.806785387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:11.815887 systemd[1]: cri-containerd-f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e.scope: Deactivated successfully. Mar 10 01:08:11.816806 systemd[1]: cri-containerd-f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e.scope: Consumed 12.825s CPU time. 
Mar 10 01:08:11.835135 containerd[1469]: time="2026-03-10T01:08:11.834944441Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:08:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:08:11.848996 containerd[1469]: time="2026-03-10T01:08:11.848920670Z" level=info msg="StopContainer for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" returns successfully" Mar 10 01:08:11.850234 containerd[1469]: time="2026-03-10T01:08:11.850032776Z" level=info msg="StopPodSandbox for \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\"" Mar 10 01:08:11.853939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e-rootfs.mount: Deactivated successfully. Mar 10 01:08:11.860177 containerd[1469]: time="2026-03-10T01:08:11.850093178Z" level=info msg="Container to stop \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:08:11.865478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca-shm.mount: Deactivated successfully. Mar 10 01:08:11.874121 systemd[1]: cri-containerd-9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca.scope: Deactivated successfully. 
Mar 10 01:08:11.877357 containerd[1469]: time="2026-03-10T01:08:11.877203595Z" level=info msg="shim disconnected" id=f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e namespace=k8s.io Mar 10 01:08:11.877357 containerd[1469]: time="2026-03-10T01:08:11.877317827Z" level=warning msg="cleaning up after shim disconnected" id=f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e namespace=k8s.io Mar 10 01:08:11.877357 containerd[1469]: time="2026-03-10T01:08:11.877333206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:11.906811 containerd[1469]: time="2026-03-10T01:08:11.906648575Z" level=info msg="StopContainer for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" returns successfully" Mar 10 01:08:11.907824 containerd[1469]: time="2026-03-10T01:08:11.907784987Z" level=info msg="StopPodSandbox for \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\"" Mar 10 01:08:11.907895 containerd[1469]: time="2026-03-10T01:08:11.907835381Z" level=info msg="Container to stop \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:08:11.907895 containerd[1469]: time="2026-03-10T01:08:11.907856470Z" level=info msg="Container to stop \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:08:11.907895 containerd[1469]: time="2026-03-10T01:08:11.907871678Z" level=info msg="Container to stop \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:08:11.907895 containerd[1469]: time="2026-03-10T01:08:11.907889151Z" level=info msg="Container to stop \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:08:11.908083 
containerd[1469]: time="2026-03-10T01:08:11.907906252Z" level=info msg="Container to stop \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:08:11.911507 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d-shm.mount: Deactivated successfully. Mar 10 01:08:11.923879 systemd[1]: cri-containerd-25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d.scope: Deactivated successfully. Mar 10 01:08:11.943398 containerd[1469]: time="2026-03-10T01:08:11.943149205Z" level=info msg="shim disconnected" id=9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca namespace=k8s.io Mar 10 01:08:11.943398 containerd[1469]: time="2026-03-10T01:08:11.943200780Z" level=warning msg="cleaning up after shim disconnected" id=9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca namespace=k8s.io Mar 10 01:08:11.943398 containerd[1469]: time="2026-03-10T01:08:11.943211390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:11.968972 containerd[1469]: time="2026-03-10T01:08:11.966895077Z" level=info msg="shim disconnected" id=25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d namespace=k8s.io Mar 10 01:08:11.968972 containerd[1469]: time="2026-03-10T01:08:11.966996184Z" level=warning msg="cleaning up after shim disconnected" id=25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d namespace=k8s.io Mar 10 01:08:11.968972 containerd[1469]: time="2026-03-10T01:08:11.967014328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:11.968972 containerd[1469]: time="2026-03-10T01:08:11.967727622Z" level=info msg="TearDown network for sandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" successfully" Mar 10 01:08:11.968972 containerd[1469]: time="2026-03-10T01:08:11.967746758Z" level=info msg="StopPodSandbox for 
\"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" returns successfully" Mar 10 01:08:11.995563 containerd[1469]: time="2026-03-10T01:08:11.995433454Z" level=info msg="TearDown network for sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" successfully" Mar 10 01:08:11.995563 containerd[1469]: time="2026-03-10T01:08:11.995480763Z" level=info msg="StopPodSandbox for \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" returns successfully" Mar 10 01:08:12.074956 kubelet[2556]: I0310 01:08:12.074837 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-etc-cni-netd\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.074956 kubelet[2556]: I0310 01:08:12.074935 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f095890f-9d2b-4988-99a9-a592f713f464-clustermesh-secrets\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.074956 kubelet[2556]: I0310 01:08:12.074957 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-net\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.075778 kubelet[2556]: I0310 01:08:12.074973 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-lib-modules\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.075778 kubelet[2556]: I0310 01:08:12.074994 2556 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-run\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.075778 kubelet[2556]: I0310 01:08:12.074995 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.075778 kubelet[2556]: I0310 01:08:12.075008 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cni-path\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.075778 kubelet[2556]: I0310 01:08:12.075073 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drlj9\" (UniqueName: \"kubernetes.io/projected/d2bea28b-0c8d-4628-b260-d01f23d60891-kube-api-access-drlj9\") pod \"d2bea28b-0c8d-4628-b260-d01f23d60891\" (UID: \"d2bea28b-0c8d-4628-b260-d01f23d60891\") " Mar 10 01:08:12.075778 kubelet[2556]: I0310 01:08:12.075098 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-hostproc\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076014 kubelet[2556]: I0310 01:08:12.075122 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-hubble-tls\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076014 kubelet[2556]: I0310 01:08:12.075139 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4knz\" (UniqueName: \"kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-kube-api-access-h4knz\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076014 kubelet[2556]: I0310 01:08:12.075156 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-xtables-lock\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076014 kubelet[2556]: I0310 01:08:12.075186 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-cgroup\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076014 kubelet[2556]: I0310 01:08:12.075034 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cni-path" (OuterVolumeSpecName: "cni-path") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.076014 kubelet[2556]: I0310 01:08:12.075091 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.076173 kubelet[2556]: I0310 01:08:12.075191 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.076173 kubelet[2556]: I0310 01:08:12.075215 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-hostproc" (OuterVolumeSpecName: "hostproc") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.076173 kubelet[2556]: I0310 01:08:12.075219 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2bea28b-0c8d-4628-b260-d01f23d60891-cilium-config-path\") pod \"d2bea28b-0c8d-4628-b260-d01f23d60891\" (UID: \"d2bea28b-0c8d-4628-b260-d01f23d60891\") " Mar 10 01:08:12.076173 kubelet[2556]: I0310 01:08:12.075833 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f095890f-9d2b-4988-99a9-a592f713f464-cilium-config-path\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076173 kubelet[2556]: I0310 01:08:12.075854 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-kernel\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: 
\"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075868 2556 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-bpf-maps\") pod \"f095890f-9d2b-4988-99a9-a592f713f464\" (UID: \"f095890f-9d2b-4988-99a9-a592f713f464\") " Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075900 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075909 2556 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075917 2556 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075925 2556 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075932 2556 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.076379 kubelet[2556]: I0310 01:08:12.075958 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f095890f-9d2b-4988-99a9-a592f713f464" 
(UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.079713 kubelet[2556]: I0310 01:08:12.077095 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.080074 kubelet[2556]: I0310 01:08:12.079977 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.080074 kubelet[2556]: I0310 01:08:12.080049 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bea28b-0c8d-4628-b260-d01f23d60891-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2bea28b-0c8d-4628-b260-d01f23d60891" (UID: "d2bea28b-0c8d-4628-b260-d01f23d60891"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:08:12.080139 kubelet[2556]: I0310 01:08:12.080101 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.080139 kubelet[2556]: I0310 01:08:12.080132 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:08:12.081881 kubelet[2556]: I0310 01:08:12.081811 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f095890f-9d2b-4988-99a9-a592f713f464-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 01:08:12.082378 kubelet[2556]: I0310 01:08:12.082352 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bea28b-0c8d-4628-b260-d01f23d60891-kube-api-access-drlj9" (OuterVolumeSpecName: "kube-api-access-drlj9") pod "d2bea28b-0c8d-4628-b260-d01f23d60891" (UID: "d2bea28b-0c8d-4628-b260-d01f23d60891"). InnerVolumeSpecName "kube-api-access-drlj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:08:12.082799 kubelet[2556]: I0310 01:08:12.082644 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f095890f-9d2b-4988-99a9-a592f713f464-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:08:12.082953 kubelet[2556]: I0310 01:08:12.082873 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-kube-api-access-h4knz" (OuterVolumeSpecName: "kube-api-access-h4knz") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "kube-api-access-h4knz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:08:12.084612 kubelet[2556]: I0310 01:08:12.084544 2556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f095890f-9d2b-4988-99a9-a592f713f464" (UID: "f095890f-9d2b-4988-99a9-a592f713f464"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176161 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f095890f-9d2b-4988-99a9-a592f713f464-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176213 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176224 2556 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176233 2556 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f095890f-9d2b-4988-99a9-a592f713f464-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176241 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176251 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drlj9\" (UniqueName: \"kubernetes.io/projected/d2bea28b-0c8d-4628-b260-d01f23d60891-kube-api-access-drlj9\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176260 2556 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176244 kubelet[2556]: I0310 01:08:12.176268 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4knz\" (UniqueName: \"kubernetes.io/projected/f095890f-9d2b-4988-99a9-a592f713f464-kube-api-access-h4knz\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176866 kubelet[2556]: I0310 01:08:12.176308 2556 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176866 kubelet[2556]: I0310 01:08:12.176318 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f095890f-9d2b-4988-99a9-a592f713f464-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.176866 kubelet[2556]: I0310 01:08:12.176326 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2bea28b-0c8d-4628-b260-d01f23d60891-cilium-config-path\") on 
node \"localhost\" DevicePath \"\"" Mar 10 01:08:12.199893 kubelet[2556]: E0310 01:08:12.199830 2556 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:08:12.354817 kubelet[2556]: I0310 01:08:12.354594 2556 scope.go:117] "RemoveContainer" containerID="f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e" Mar 10 01:08:12.357972 containerd[1469]: time="2026-03-10T01:08:12.357323867Z" level=info msg="RemoveContainer for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\"" Mar 10 01:08:12.364440 systemd[1]: Removed slice kubepods-besteffort-podd2bea28b_0c8d_4628_b260_d01f23d60891.slice - libcontainer container kubepods-besteffort-podd2bea28b_0c8d_4628_b260_d01f23d60891.slice. Mar 10 01:08:12.367162 containerd[1469]: time="2026-03-10T01:08:12.366034876Z" level=info msg="RemoveContainer for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" returns successfully" Mar 10 01:08:12.367268 kubelet[2556]: I0310 01:08:12.366481 2556 scope.go:117] "RemoveContainer" containerID="2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f" Mar 10 01:08:12.367885 systemd[1]: Removed slice kubepods-burstable-podf095890f_9d2b_4988_99a9_a592f713f464.slice - libcontainer container kubepods-burstable-podf095890f_9d2b_4988_99a9_a592f713f464.slice. Mar 10 01:08:12.368448 containerd[1469]: time="2026-03-10T01:08:12.368067519Z" level=info msg="RemoveContainer for \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\"" Mar 10 01:08:12.368015 systemd[1]: kubepods-burstable-podf095890f_9d2b_4988_99a9_a592f713f464.slice: Consumed 13.061s CPU time. 
Mar 10 01:08:12.373565 containerd[1469]: time="2026-03-10T01:08:12.373477809Z" level=info msg="RemoveContainer for \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\" returns successfully" Mar 10 01:08:12.373850 kubelet[2556]: I0310 01:08:12.373787 2556 scope.go:117] "RemoveContainer" containerID="b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4" Mar 10 01:08:12.375332 containerd[1469]: time="2026-03-10T01:08:12.375241194Z" level=info msg="RemoveContainer for \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\"" Mar 10 01:08:12.380348 containerd[1469]: time="2026-03-10T01:08:12.380314289Z" level=info msg="RemoveContainer for \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\" returns successfully" Mar 10 01:08:12.383729 kubelet[2556]: I0310 01:08:12.383517 2556 scope.go:117] "RemoveContainer" containerID="2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a" Mar 10 01:08:12.391959 containerd[1469]: time="2026-03-10T01:08:12.391446121Z" level=info msg="RemoveContainer for \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\"" Mar 10 01:08:12.398201 containerd[1469]: time="2026-03-10T01:08:12.398072700Z" level=info msg="RemoveContainer for \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\" returns successfully" Mar 10 01:08:12.398539 kubelet[2556]: I0310 01:08:12.398426 2556 scope.go:117] "RemoveContainer" containerID="aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34" Mar 10 01:08:12.400614 containerd[1469]: time="2026-03-10T01:08:12.400158265Z" level=info msg="RemoveContainer for \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\"" Mar 10 01:08:12.405135 containerd[1469]: time="2026-03-10T01:08:12.405035472Z" level=info msg="RemoveContainer for \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\" returns successfully" Mar 10 01:08:12.405903 kubelet[2556]: I0310 01:08:12.405833 2556 scope.go:117] 
"RemoveContainer" containerID="f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e" Mar 10 01:08:12.414611 containerd[1469]: time="2026-03-10T01:08:12.414389634Z" level=error msg="ContainerStatus for \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\": not found" Mar 10 01:08:12.426481 kubelet[2556]: E0310 01:08:12.426371 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\": not found" containerID="f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e" Mar 10 01:08:12.426597 kubelet[2556]: I0310 01:08:12.426462 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e"} err="failed to get container status \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f94a9e1660fae30f2bae61c49155f5a8a1e9b58045f52c82fa54c03c1045bc0e\": not found" Mar 10 01:08:12.426597 kubelet[2556]: I0310 01:08:12.426528 2556 scope.go:117] "RemoveContainer" containerID="2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f" Mar 10 01:08:12.427083 containerd[1469]: time="2026-03-10T01:08:12.427005606Z" level=error msg="ContainerStatus for \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\": not found" Mar 10 01:08:12.427268 kubelet[2556]: E0310 01:08:12.427235 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\": not found" containerID="2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f" Mar 10 01:08:12.427366 kubelet[2556]: I0310 01:08:12.427263 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f"} err="failed to get container status \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2edd1dfb1abf7b340a959812c3bcff54a2a97e02e3dfbcfc97fbbaeb10c6206f\": not found" Mar 10 01:08:12.427366 kubelet[2556]: I0310 01:08:12.427336 2556 scope.go:117] "RemoveContainer" containerID="b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4" Mar 10 01:08:12.427786 containerd[1469]: time="2026-03-10T01:08:12.427749178Z" level=error msg="ContainerStatus for \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\": not found" Mar 10 01:08:12.428161 kubelet[2556]: E0310 01:08:12.427990 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\": not found" containerID="b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4" Mar 10 01:08:12.428161 kubelet[2556]: I0310 01:08:12.428023 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4"} err="failed to get container status \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"b050c21826c4c64d608e73fc313ff6e95defc7836a88fb382442c7a9c0b371a4\": not found" Mar 10 01:08:12.428161 kubelet[2556]: I0310 01:08:12.428040 2556 scope.go:117] "RemoveContainer" containerID="2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a" Mar 10 01:08:12.428375 containerd[1469]: time="2026-03-10T01:08:12.428321532Z" level=error msg="ContainerStatus for \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\": not found" Mar 10 01:08:12.428509 kubelet[2556]: E0310 01:08:12.428429 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\": not found" containerID="2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a" Mar 10 01:08:12.428509 kubelet[2556]: I0310 01:08:12.428456 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a"} err="failed to get container status \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a2dfcb9d740742ae0969b60b9102daa40f119ae591da3341bed7cd42173135a\": not found" Mar 10 01:08:12.428509 kubelet[2556]: I0310 01:08:12.428475 2556 scope.go:117] "RemoveContainer" containerID="aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34" Mar 10 01:08:12.428871 containerd[1469]: time="2026-03-10T01:08:12.428814408Z" level=error msg="ContainerStatus for \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\": not found" Mar 10 01:08:12.429025 kubelet[2556]: E0310 01:08:12.428976 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\": not found" containerID="aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34" Mar 10 01:08:12.429072 kubelet[2556]: I0310 01:08:12.429014 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34"} err="failed to get container status \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\": rpc error: code = NotFound desc = an error occurred when try to find container \"aed609ea0cd217dc6e58fa39461d77421ecd1a4c2cf6499003bb4c0e84253e34\": not found" Mar 10 01:08:12.429072 kubelet[2556]: I0310 01:08:12.429038 2556 scope.go:117] "RemoveContainer" containerID="1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607" Mar 10 01:08:12.430747 containerd[1469]: time="2026-03-10T01:08:12.430553972Z" level=info msg="RemoveContainer for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\"" Mar 10 01:08:12.434023 containerd[1469]: time="2026-03-10T01:08:12.433966887Z" level=info msg="RemoveContainer for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" returns successfully" Mar 10 01:08:12.434353 kubelet[2556]: I0310 01:08:12.434306 2556 scope.go:117] "RemoveContainer" containerID="1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607" Mar 10 01:08:12.434598 containerd[1469]: time="2026-03-10T01:08:12.434562784Z" level=error msg="ContainerStatus for \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\": not found" Mar 10 01:08:12.434918 kubelet[2556]: E0310 01:08:12.434856 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\": not found" containerID="1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607" Mar 10 01:08:12.434967 kubelet[2556]: I0310 01:08:12.434914 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607"} err="failed to get container status \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a5537ded45275fb4bbc6ca24eb96e4624ea4fc5736dbe8825a432dadb678607\": not found" Mar 10 01:08:12.729042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca-rootfs.mount: Deactivated successfully. Mar 10 01:08:12.729187 systemd[1]: var-lib-kubelet-pods-d2bea28b\x2d0c8d\x2d4628\x2db260\x2dd01f23d60891-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrlj9.mount: Deactivated successfully. Mar 10 01:08:12.729270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d-rootfs.mount: Deactivated successfully. Mar 10 01:08:12.729394 systemd[1]: var-lib-kubelet-pods-f095890f\x2d9d2b\x2d4988\x2d99a9\x2da592f713f464-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4knz.mount: Deactivated successfully. Mar 10 01:08:12.729526 systemd[1]: var-lib-kubelet-pods-f095890f\x2d9d2b\x2d4988\x2d99a9\x2da592f713f464-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 10 01:08:12.729614 systemd[1]: var-lib-kubelet-pods-f095890f\x2d9d2b\x2d4988\x2d99a9\x2da592f713f464-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 10 01:08:13.660552 sshd[4208]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:13.669915 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:34688.service: Deactivated successfully. Mar 10 01:08:13.672651 systemd[1]: session-25.scope: Deactivated successfully. Mar 10 01:08:13.674933 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit. Mar 10 01:08:13.684257 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:34692.service - OpenSSH per-connection server daemon (10.0.0.1:34692). Mar 10 01:08:13.685798 systemd-logind[1459]: Removed session 25. Mar 10 01:08:13.729241 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 34692 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:08:13.731614 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:08:13.737760 systemd-logind[1459]: New session 26 of user core. Mar 10 01:08:13.747909 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 10 01:08:14.024060 kubelet[2556]: I0310 01:08:14.023899 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bea28b-0c8d-4628-b260-d01f23d60891" path="/var/lib/kubelet/pods/d2bea28b-0c8d-4628-b260-d01f23d60891/volumes" Mar 10 01:08:14.024731 kubelet[2556]: I0310 01:08:14.024638 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f095890f-9d2b-4988-99a9-a592f713f464" path="/var/lib/kubelet/pods/f095890f-9d2b-4988-99a9-a592f713f464/volumes" Mar 10 01:08:14.446608 sshd[4374]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:14.458008 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:34692.service: Deactivated successfully. Mar 10 01:08:14.462935 systemd[1]: session-26.scope: Deactivated successfully. 
Mar 10 01:08:14.467149 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit. Mar 10 01:08:14.476391 systemd[1]: Started sshd@26-10.0.0.112:22-10.0.0.1:34706.service - OpenSSH per-connection server daemon (10.0.0.1:34706). Mar 10 01:08:14.478175 kubelet[2556]: E0310 01:08:14.477452 2556 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Mar 10 01:08:14.478175 kubelet[2556]: E0310 01:08:14.477556 2556 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Mar 10 01:08:14.478175 kubelet[2556]: E0310 01:08:14.477606 2556 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret" Mar 10 01:08:14.478175 kubelet[2556]: E0310 01:08:14.477650 2556 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" 
logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Mar 10 01:08:14.479636 kubelet[2556]: E0310 01:08:14.479473 2556 status_manager.go:1018] "Failed to get status for pod" err="pods \"cilium-22snk\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="7315c894-84d6-487e-b2fb-58e39c15ed18" pod="kube-system/cilium-22snk" Mar 10 01:08:14.483254 systemd-logind[1459]: Removed session 26. Mar 10 01:08:14.499358 systemd[1]: Created slice kubepods-burstable-pod7315c894_84d6_487e_b2fb_58e39c15ed18.slice - libcontainer container kubepods-burstable-pod7315c894_84d6_487e_b2fb_58e39c15ed18.slice. Mar 10 01:08:14.530103 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 34706 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:08:14.532183 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:08:14.537941 systemd-logind[1459]: New session 27 of user core. Mar 10 01:08:14.546888 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 10 01:08:14.594245 kubelet[2556]: I0310 01:08:14.593936 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-lib-modules\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594245 kubelet[2556]: I0310 01:08:14.593993 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-host-proc-sys-net\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594245 kubelet[2556]: I0310 01:08:14.594020 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-xtables-lock\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594245 kubelet[2556]: I0310 01:08:14.594041 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-ipsec-secrets\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594245 kubelet[2556]: I0310 01:08:14.594071 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-bpf-maps\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594245 kubelet[2556]: I0310 01:08:14.594094 2556 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-etc-cni-netd\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594786 kubelet[2556]: I0310 01:08:14.594115 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-cgroup\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594786 kubelet[2556]: I0310 01:08:14.594137 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-host-proc-sys-kernel\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594786 kubelet[2556]: I0310 01:08:14.594171 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-run\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594786 kubelet[2556]: I0310 01:08:14.594218 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7315c894-84d6-487e-b2fb-58e39c15ed18-clustermesh-secrets\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594786 kubelet[2556]: I0310 01:08:14.594345 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/7315c894-84d6-487e-b2fb-58e39c15ed18-hubble-tls\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.594786 kubelet[2556]: I0310 01:08:14.594424 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq5x2\" (UniqueName: \"kubernetes.io/projected/7315c894-84d6-487e-b2fb-58e39c15ed18-kube-api-access-pq5x2\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.595017 kubelet[2556]: I0310 01:08:14.594446 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-hostproc\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.595017 kubelet[2556]: I0310 01:08:14.594539 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7315c894-84d6-487e-b2fb-58e39c15ed18-cni-path\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.595017 kubelet[2556]: I0310 01:08:14.594575 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-config-path\") pod \"cilium-22snk\" (UID: \"7315c894-84d6-487e-b2fb-58e39c15ed18\") " pod="kube-system/cilium-22snk" Mar 10 01:08:14.602607 sshd[4387]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:14.617602 systemd[1]: sshd@26-10.0.0.112:22-10.0.0.1:34706.service: Deactivated successfully. Mar 10 01:08:14.620438 systemd[1]: session-27.scope: Deactivated successfully. 
Mar 10 01:08:14.622951 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit. Mar 10 01:08:14.634243 systemd[1]: Started sshd@27-10.0.0.112:22-10.0.0.1:34714.service - OpenSSH per-connection server daemon (10.0.0.1:34714). Mar 10 01:08:14.635915 systemd-logind[1459]: Removed session 27. Mar 10 01:08:14.671040 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 34714 ssh2: RSA SHA256:+9IJo93h7v+sMDVPkoY71zUhFJfGSCbesSXGLeDbYVk Mar 10 01:08:14.673420 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:08:14.680255 systemd-logind[1459]: New session 28 of user core. Mar 10 01:08:14.689950 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 10 01:08:14.846742 kubelet[2556]: I0310 01:08:14.846464 2556 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T01:08:14Z","lastTransitionTime":"2026-03-10T01:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 10 01:08:15.697836 kubelet[2556]: E0310 01:08:15.697748 2556 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 10 01:08:15.697836 kubelet[2556]: E0310 01:08:15.697819 2556 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-22snk: failed to sync secret cache: timed out waiting for the condition Mar 10 01:08:15.698565 kubelet[2556]: E0310 01:08:15.697935 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7315c894-84d6-487e-b2fb-58e39c15ed18-hubble-tls podName:7315c894-84d6-487e-b2fb-58e39c15ed18 nodeName:}" failed. No retries permitted until 2026-03-10 01:08:16.197912961 +0000 UTC m=+104.362520483 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/7315c894-84d6-487e-b2fb-58e39c15ed18-hubble-tls") pod "cilium-22snk" (UID: "7315c894-84d6-487e-b2fb-58e39c15ed18") : failed to sync secret cache: timed out waiting for the condition Mar 10 01:08:15.698565 kubelet[2556]: E0310 01:08:15.697748 2556 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Mar 10 01:08:15.698565 kubelet[2556]: E0310 01:08:15.697768 2556 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 10 01:08:15.698565 kubelet[2556]: E0310 01:08:15.698146 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-ipsec-secrets podName:7315c894-84d6-487e-b2fb-58e39c15ed18 nodeName:}" failed. No retries permitted until 2026-03-10 01:08:16.198117372 +0000 UTC m=+104.362724895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-ipsec-secrets") pod "cilium-22snk" (UID: "7315c894-84d6-487e-b2fb-58e39c15ed18") : failed to sync secret cache: timed out waiting for the condition Mar 10 01:08:15.698955 kubelet[2556]: E0310 01:08:15.698174 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-config-path podName:7315c894-84d6-487e-b2fb-58e39c15ed18 nodeName:}" failed. No retries permitted until 2026-03-10 01:08:16.198157895 +0000 UTC m=+104.362765418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/7315c894-84d6-487e-b2fb-58e39c15ed18-cilium-config-path") pod "cilium-22snk" (UID: "7315c894-84d6-487e-b2fb-58e39c15ed18") : failed to sync configmap cache: timed out waiting for the condition Mar 10 01:08:16.020760 kubelet[2556]: E0310 01:08:16.020511 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-9tcqh" podUID="72299766-4eb7-4fec-824f-b534c7b4b268" Mar 10 01:08:16.307535 kubelet[2556]: E0310 01:08:16.307258 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:16.308274 containerd[1469]: time="2026-03-10T01:08:16.308209009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22snk,Uid:7315c894-84d6-487e-b2fb-58e39c15ed18,Namespace:kube-system,Attempt:0,}" Mar 10 01:08:16.340245 containerd[1469]: time="2026-03-10T01:08:16.339964696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:08:16.340245 containerd[1469]: time="2026-03-10T01:08:16.340062979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:08:16.340245 containerd[1469]: time="2026-03-10T01:08:16.340079409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:08:16.340245 containerd[1469]: time="2026-03-10T01:08:16.340194182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:08:16.372907 systemd[1]: Started cri-containerd-e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d.scope - libcontainer container e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d. Mar 10 01:08:16.404378 containerd[1469]: time="2026-03-10T01:08:16.404321973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22snk,Uid:7315c894-84d6-487e-b2fb-58e39c15ed18,Namespace:kube-system,Attempt:0,} returns sandbox id \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\"" Mar 10 01:08:16.407570 kubelet[2556]: E0310 01:08:16.407515 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:16.413965 containerd[1469]: time="2026-03-10T01:08:16.413850491Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:08:16.439718 containerd[1469]: time="2026-03-10T01:08:16.439581164Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce\"" Mar 10 01:08:16.441764 containerd[1469]: time="2026-03-10T01:08:16.440386974Z" level=info msg="StartContainer for \"d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce\"" Mar 10 01:08:16.482924 systemd[1]: Started cri-containerd-d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce.scope - libcontainer container d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce. 
Mar 10 01:08:16.518620 containerd[1469]: time="2026-03-10T01:08:16.518463445Z" level=info msg="StartContainer for \"d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce\" returns successfully" Mar 10 01:08:16.533612 systemd[1]: cri-containerd-d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce.scope: Deactivated successfully. Mar 10 01:08:16.581599 containerd[1469]: time="2026-03-10T01:08:16.581251382Z" level=info msg="shim disconnected" id=d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce namespace=k8s.io Mar 10 01:08:16.581599 containerd[1469]: time="2026-03-10T01:08:16.581393776Z" level=warning msg="cleaning up after shim disconnected" id=d0a97b1af0a1fb4e030d9561c820b1675b302afe0b36e8de2c907be08937b4ce namespace=k8s.io Mar 10 01:08:16.581599 containerd[1469]: time="2026-03-10T01:08:16.581408593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:17.201744 kubelet[2556]: E0310 01:08:17.201607 2556 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:08:17.214077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267256808.mount: Deactivated successfully. 
Mar 10 01:08:17.380230 kubelet[2556]: E0310 01:08:17.380195 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:17.386038 containerd[1469]: time="2026-03-10T01:08:17.385910194Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:08:17.407355 containerd[1469]: time="2026-03-10T01:08:17.407215564Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2\"" Mar 10 01:08:17.408494 containerd[1469]: time="2026-03-10T01:08:17.408389500Z" level=info msg="StartContainer for \"6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2\"" Mar 10 01:08:17.454915 systemd[1]: Started cri-containerd-6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2.scope - libcontainer container 6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2. Mar 10 01:08:17.487814 containerd[1469]: time="2026-03-10T01:08:17.487757443Z" level=info msg="StartContainer for \"6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2\" returns successfully" Mar 10 01:08:17.497946 systemd[1]: cri-containerd-6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2.scope: Deactivated successfully. 
Mar 10 01:08:17.532716 containerd[1469]: time="2026-03-10T01:08:17.532598307Z" level=info msg="shim disconnected" id=6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2 namespace=k8s.io Mar 10 01:08:17.532716 containerd[1469]: time="2026-03-10T01:08:17.532650544Z" level=warning msg="cleaning up after shim disconnected" id=6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2 namespace=k8s.io Mar 10 01:08:17.532716 containerd[1469]: time="2026-03-10T01:08:17.532704233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:18.022103 kubelet[2556]: E0310 01:08:18.021909 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-9tcqh" podUID="72299766-4eb7-4fec-824f-b534c7b4b268" Mar 10 01:08:18.214943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ac966532e8975acd9328a6987f62035272e236b77f9fdecbf8ff27a36119aa2-rootfs.mount: Deactivated successfully. Mar 10 01:08:18.385547 kubelet[2556]: E0310 01:08:18.385502 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:18.395379 containerd[1469]: time="2026-03-10T01:08:18.395260028Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 10 01:08:18.433652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913497087.mount: Deactivated successfully. 
Mar 10 01:08:18.436610 containerd[1469]: time="2026-03-10T01:08:18.436500759Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08\"" Mar 10 01:08:18.437752 containerd[1469]: time="2026-03-10T01:08:18.437592796Z" level=info msg="StartContainer for \"ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08\"" Mar 10 01:08:18.486876 systemd[1]: Started cri-containerd-ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08.scope - libcontainer container ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08. Mar 10 01:08:18.518965 containerd[1469]: time="2026-03-10T01:08:18.518916654Z" level=info msg="StartContainer for \"ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08\" returns successfully" Mar 10 01:08:18.522128 systemd[1]: cri-containerd-ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08.scope: Deactivated successfully. Mar 10 01:08:18.551103 containerd[1469]: time="2026-03-10T01:08:18.551027863Z" level=info msg="shim disconnected" id=ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08 namespace=k8s.io Mar 10 01:08:18.551103 containerd[1469]: time="2026-03-10T01:08:18.551087875Z" level=warning msg="cleaning up after shim disconnected" id=ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08 namespace=k8s.io Mar 10 01:08:18.551103 containerd[1469]: time="2026-03-10T01:08:18.551097662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:19.214325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea51536a7ea96c09f8320ce2f07fdc10ef223b59ce2c83e0f9d2789c64fe7b08-rootfs.mount: Deactivated successfully. 
Mar 10 01:08:19.390773 kubelet[2556]: E0310 01:08:19.390645 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:19.397777 containerd[1469]: time="2026-03-10T01:08:19.395356030Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 10 01:08:19.414789 containerd[1469]: time="2026-03-10T01:08:19.414644396Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860\"" Mar 10 01:08:19.415882 containerd[1469]: time="2026-03-10T01:08:19.415765981Z" level=info msg="StartContainer for \"67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860\"" Mar 10 01:08:19.463875 systemd[1]: Started cri-containerd-67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860.scope - libcontainer container 67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860. Mar 10 01:08:19.493427 systemd[1]: cri-containerd-67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860.scope: Deactivated successfully. 
Mar 10 01:08:19.495277 containerd[1469]: time="2026-03-10T01:08:19.495223916Z" level=info msg="StartContainer for \"67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860\" returns successfully" Mar 10 01:08:19.525596 containerd[1469]: time="2026-03-10T01:08:19.525466962Z" level=info msg="shim disconnected" id=67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860 namespace=k8s.io Mar 10 01:08:19.525596 containerd[1469]: time="2026-03-10T01:08:19.525542473Z" level=warning msg="cleaning up after shim disconnected" id=67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860 namespace=k8s.io Mar 10 01:08:19.525596 containerd[1469]: time="2026-03-10T01:08:19.525552652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:08:20.022725 kubelet[2556]: E0310 01:08:20.021258 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-9tcqh" podUID="72299766-4eb7-4fec-824f-b534c7b4b268" Mar 10 01:08:20.215189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67cf1230ec90fcb94691b7962afcdd3f4ba45ef9319c1b135e72b9d8f39c6860-rootfs.mount: Deactivated successfully. Mar 10 01:08:20.395827 kubelet[2556]: E0310 01:08:20.395785 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:20.403652 containerd[1469]: time="2026-03-10T01:08:20.403453450Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 10 01:08:20.422373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832383975.mount: Deactivated successfully. 
Mar 10 01:08:20.422645 containerd[1469]: time="2026-03-10T01:08:20.422564785Z" level=info msg="CreateContainer within sandbox \"e779bc4fd17235128db9d43728d6bfad4fca70a408dc023b8b3c94d6ee2d270d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95354f0503b245a74f15141912e75a004791c964dbfa5f01e31c2000697969b1\"" Mar 10 01:08:20.423754 containerd[1469]: time="2026-03-10T01:08:20.423270808Z" level=info msg="StartContainer for \"95354f0503b245a74f15141912e75a004791c964dbfa5f01e31c2000697969b1\"" Mar 10 01:08:20.468919 systemd[1]: Started cri-containerd-95354f0503b245a74f15141912e75a004791c964dbfa5f01e31c2000697969b1.scope - libcontainer container 95354f0503b245a74f15141912e75a004791c964dbfa5f01e31c2000697969b1. Mar 10 01:08:20.503656 containerd[1469]: time="2026-03-10T01:08:20.503601386Z" level=info msg="StartContainer for \"95354f0503b245a74f15141912e75a004791c964dbfa5f01e31c2000697969b1\" returns successfully" Mar 10 01:08:21.014752 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 10 01:08:21.401716 kubelet[2556]: E0310 01:08:21.401564 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:22.020621 kubelet[2556]: E0310 01:08:22.020518 2556 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-9tcqh" podUID="72299766-4eb7-4fec-824f-b534c7b4b268" Mar 10 01:08:22.404928 kubelet[2556]: E0310 01:08:22.404741 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:23.007043 systemd[1]: 
run-containerd-runc-k8s.io-95354f0503b245a74f15141912e75a004791c964dbfa5f01e31c2000697969b1-runc.hwQ7vF.mount: Deactivated successfully. Mar 10 01:08:24.023532 kubelet[2556]: E0310 01:08:24.023416 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:24.659155 systemd-networkd[1394]: lxc_health: Link UP Mar 10 01:08:24.674210 systemd-networkd[1394]: lxc_health: Gained carrier Mar 10 01:08:26.136840 systemd-networkd[1394]: lxc_health: Gained IPv6LL Mar 10 01:08:26.351487 kubelet[2556]: E0310 01:08:26.340185 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:26.455453 kubelet[2556]: E0310 01:08:26.427988 2556 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:26.542592 kubelet[2556]: I0310 01:08:26.540392 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-22snk" podStartSLOduration=12.540377247 podStartE2EDuration="12.540377247s" podCreationTimestamp="2026-03-10 01:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:08:21.415593924 +0000 UTC m=+109.580201447" watchObservedRunningTime="2026-03-10 01:08:26.540377247 +0000 UTC m=+114.704984770" Mar 10 01:08:32.878519 containerd[1469]: time="2026-03-10T01:08:32.878221277Z" level=info msg="StopPodSandbox for \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\"" Mar 10 01:08:32.878519 containerd[1469]: time="2026-03-10T01:08:32.878464308Z" level=info msg="TearDown network for sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" 
successfully" Mar 10 01:08:32.878519 containerd[1469]: time="2026-03-10T01:08:32.878489776Z" level=info msg="StopPodSandbox for \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" returns successfully" Mar 10 01:08:32.884958 containerd[1469]: time="2026-03-10T01:08:32.883472115Z" level=info msg="RemovePodSandbox for \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\"" Mar 10 01:08:32.884958 containerd[1469]: time="2026-03-10T01:08:32.883539901Z" level=info msg="Forcibly stopping sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\"" Mar 10 01:08:32.884958 containerd[1469]: time="2026-03-10T01:08:32.883616454Z" level=info msg="TearDown network for sandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" successfully" Mar 10 01:08:33.184749 containerd[1469]: time="2026-03-10T01:08:33.171307004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:08:33.184749 containerd[1469]: time="2026-03-10T01:08:33.171824738Z" level=info msg="RemovePodSandbox \"25829e490cc2c2f3dbfd71c1765ee2ed3a606f8e25688b617761ab4e6c60f80d\" returns successfully" Mar 10 01:08:35.370568 containerd[1469]: time="2026-03-10T01:08:35.370370759Z" level=info msg="StopPodSandbox for \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\"" Mar 10 01:08:35.373947 containerd[1469]: time="2026-03-10T01:08:35.373922169Z" level=info msg="TearDown network for sandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" successfully" Mar 10 01:08:35.375982 containerd[1469]: time="2026-03-10T01:08:35.375956475Z" level=info msg="StopPodSandbox for \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" returns successfully" Mar 10 01:08:35.377109 containerd[1469]: time="2026-03-10T01:08:35.377085468Z" level=info msg="RemovePodSandbox for \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\"" Mar 10 01:08:35.377230 containerd[1469]: time="2026-03-10T01:08:35.377205782Z" level=info msg="Forcibly stopping sandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\"" Mar 10 01:08:35.377470 containerd[1469]: time="2026-03-10T01:08:35.377447692Z" level=info msg="TearDown network for sandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" successfully" Mar 10 01:08:35.418596 containerd[1469]: time="2026-03-10T01:08:35.418310816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 01:08:35.418596 containerd[1469]: time="2026-03-10T01:08:35.418545121Z" level=info msg="RemovePodSandbox \"9213c6628a05be8b09a187e448a25ebf60ea766279d0ad80646b885b80bbafca\" returns successfully" Mar 10 01:08:35.775262 sshd[4395]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:35.787056 systemd[1]: sshd@27-10.0.0.112:22-10.0.0.1:34714.service: Deactivated successfully. Mar 10 01:08:35.793126 systemd[1]: session-28.scope: Deactivated successfully. Mar 10 01:08:35.794022 systemd[1]: session-28.scope: Consumed 1.170s CPU time. Mar 10 01:08:35.796113 systemd-logind[1459]: Session 28 logged out. Waiting for processes to exit. Mar 10 01:08:35.800200 systemd-logind[1459]: Removed session 28.