Mar 2 13:04:37.321907 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 13:04:37.322071 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:04:37.322085 kernel: BIOS-provided physical RAM map:
Mar 2 13:04:37.322091 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 13:04:37.322097 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 13:04:37.322102 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 13:04:37.322109 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 13:04:37.322115 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 13:04:37.322120 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 2 13:04:37.322126 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 2 13:04:37.322135 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 2 13:04:37.322140 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 2 13:04:37.322164 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 2 13:04:37.322171 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 2 13:04:37.322195 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 2 13:04:37.322202 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 13:04:37.322212 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 2 13:04:37.322218 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 2 13:04:37.322224 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 13:04:37.322230 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:04:37.322236 kernel: NX (Execute Disable) protection: active
Mar 2 13:04:37.322242 kernel: APIC: Static calls initialized
Mar 2 13:04:37.322248 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:04:37.322255 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 2 13:04:37.322261 kernel: SMBIOS 2.8 present.
Mar 2 13:04:37.322267 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 2 13:04:37.322273 kernel: Hypervisor detected: KVM
Mar 2 13:04:37.322282 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:04:37.322288 kernel: kvm-clock: using sched offset of 11357683686 cycles
Mar 2 13:04:37.322295 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:04:37.322301 kernel: tsc: Detected 2445.424 MHz processor
Mar 2 13:04:37.322308 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:04:37.322314 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:04:37.322320 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 2 13:04:37.322327 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 13:04:37.322333 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:04:37.322342 kernel: Using GB pages for direct mapping
Mar 2 13:04:37.322349 kernel: Secure boot disabled
Mar 2 13:04:37.322355 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:04:37.322361 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 13:04:37.322372 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 13:04:37.322378 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:04:37.322385 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:04:37.322395 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 13:04:37.322420 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:04:37.322427 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:04:37.322434 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:04:37.322440 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:04:37.322446 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 13:04:37.322453 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 13:04:37.322463 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 13:04:37.322470 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 13:04:37.322476 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 13:04:37.322482 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 13:04:37.322489 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 13:04:37.322495 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 13:04:37.322502 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 13:04:37.322508 kernel: No NUMA configuration found
Mar 2 13:04:37.322532 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 2 13:04:37.322542 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 2 13:04:37.322549 kernel: Zone ranges:
Mar 2 13:04:37.322555 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:04:37.322562 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 2 13:04:37.322568 kernel: Normal empty
Mar 2 13:04:37.322575 kernel: Movable zone start for each node
Mar 2 13:04:37.322581 kernel: Early memory node ranges
Mar 2 13:04:37.322587 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 13:04:37.322594 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 13:04:37.322600 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 13:04:37.322610 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 2 13:04:37.322616 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 2 13:04:37.322622 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 2 13:04:37.322644 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 2 13:04:37.322651 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:04:37.322658 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 13:04:37.322664 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 13:04:37.322671 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:04:37.322677 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 2 13:04:37.322687 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 2 13:04:37.322693 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 2 13:04:37.322700 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:04:37.322706 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:04:37.322713 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:04:37.322719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:04:37.322726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:04:37.322732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:04:37.322739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:04:37.322748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:04:37.322755 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:04:37.322761 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:04:37.322767 kernel: TSC deadline timer available
Mar 2 13:04:37.322774 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 13:04:37.322780 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:04:37.322787 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:04:37.322793 kernel: kvm-guest: setup PV sched yield
Mar 2 13:04:37.322799 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 2 13:04:37.322809 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:04:37.322815 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:04:37.322822 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:04:37.322828 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 13:04:37.322835 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 13:04:37.322841 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:04:37.322848 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:04:37.322854 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:04:37.322862 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:04:37.322888 kernel: random: crng init done
Mar 2 13:04:37.322895 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:04:37.322902 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:04:37.322908 kernel: Fallback order for Node 0: 0
Mar 2 13:04:37.322915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 2 13:04:37.322965 kernel: Policy zone: DMA32
Mar 2 13:04:37.323007 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:04:37.323014 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 2 13:04:37.323027 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:04:37.323034 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 13:04:37.323040 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 13:04:37.323047 kernel: Dynamic Preempt: voluntary
Mar 2 13:04:37.323054 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:04:37.323070 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:04:37.323081 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:04:37.323088 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:04:37.323095 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:04:37.323102 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:04:37.323109 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:04:37.323115 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:04:37.323125 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:04:37.323132 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:04:37.323139 kernel: Console: colour dummy device 80x25
Mar 2 13:04:37.323146 kernel: printk: console [ttyS0] enabled
Mar 2 13:04:37.323173 kernel: ACPI: Core revision 20230628
Mar 2 13:04:37.323185 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:04:37.323192 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:04:37.323198 kernel: x2apic enabled
Mar 2 13:04:37.323205 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:04:37.323212 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:04:37.323219 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:04:37.323226 kernel: kvm-guest: setup PV IPIs
Mar 2 13:04:37.323232 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:04:37.323239 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 13:04:37.323249 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 2 13:04:37.323256 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:04:37.323263 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:04:37.323270 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:04:37.323276 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:04:37.323283 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:04:37.323290 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:04:37.323297 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:04:37.323304 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:04:37.323377 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:04:37.323387 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:04:37.323394 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:04:37.323420 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:04:37.323427 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:04:37.323434 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:04:37.323441 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:04:37.323448 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:04:37.323460 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:04:37.323467 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:04:37.323474 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:04:37.323480 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:04:37.323487 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:04:37.323494 kernel: landlock: Up and running.
Mar 2 13:04:37.323501 kernel: SELinux: Initializing.
Mar 2 13:04:37.323508 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:04:37.323514 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:04:37.323524 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:04:37.323531 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:04:37.323538 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:04:37.323545 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:04:37.323552 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:04:37.323559 kernel: signal: max sigframe size: 1776
Mar 2 13:04:37.323565 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:04:37.323572 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:04:37.323579 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:04:37.323589 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:04:37.323596 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:04:37.323603 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:04:37.323610 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:04:37.323616 kernel: smpboot: Max logical packages: 1
Mar 2 13:04:37.323623 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 2 13:04:37.323630 kernel: devtmpfs: initialized
Mar 2 13:04:37.323637 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:04:37.323644 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 13:04:37.323653 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 13:04:37.323660 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 2 13:04:37.323667 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 13:04:37.323674 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 13:04:37.323681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:04:37.323688 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:04:37.323694 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:04:37.323701 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:04:37.323708 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:04:37.323718 kernel: audit: type=2000 audit(1772456671.964:1): state=initialized audit_enabled=0 res=1
Mar 2 13:04:37.323725 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:04:37.323732 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:04:37.323738 kernel: cpuidle: using governor menu
Mar 2 13:04:37.323745 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:04:37.323752 kernel: dca service started, version 1.12.1
Mar 2 13:04:37.323759 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 13:04:37.323766 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:04:37.323773 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:04:37.323783 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:04:37.323789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:04:37.323796 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:04:37.323803 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:04:37.323810 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:04:37.323816 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:04:37.323823 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:04:37.323830 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:04:37.323837 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:04:37.323846 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 13:04:37.323853 kernel: ACPI: Interpreter enabled
Mar 2 13:04:37.323860 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:04:37.323867 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:04:37.323874 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:04:37.323880 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:04:37.323887 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:04:37.323894 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:04:37.324446 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:04:37.324625 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:04:37.324779 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:04:37.324789 kernel: PCI host bridge to bus 0000:00
Mar 2 13:04:37.325166 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:04:37.325317 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:04:37.325455 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:04:37.325599 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 13:04:37.325734 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:04:37.325867 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 2 13:04:37.326112 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:04:37.326402 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 13:04:37.326633 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 13:04:37.326796 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 2 13:04:37.327082 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 2 13:04:37.327236 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 2 13:04:37.327382 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 2 13:04:37.327563 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:04:37.327786 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 13:04:37.328681 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 2 13:04:37.328846 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 2 13:04:37.329084 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 2 13:04:37.329346 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 13:04:37.329499 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 2 13:04:37.329647 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 2 13:04:37.329793 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 2 13:04:37.330086 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 13:04:37.330252 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 2 13:04:37.330908 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 2 13:04:37.331218 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 2 13:04:37.331417 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 2 13:04:37.331671 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 13:04:37.332481 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:04:37.332731 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 13:04:37.333033 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 2 13:04:37.333202 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 2 13:04:37.333521 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 13:04:37.333679 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 2 13:04:37.333689 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:04:37.333697 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:04:37.333704 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:04:37.333717 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:04:37.333724 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:04:37.333731 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:04:37.333737 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:04:37.333744 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:04:37.333751 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:04:37.333758 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:04:37.333765 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:04:37.333772 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:04:37.333782 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:04:37.333789 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:04:37.333796 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:04:37.333803 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:04:37.333810 kernel: iommu: Default domain type: Translated
Mar 2 13:04:37.333816 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:04:37.333823 kernel: efivars: Registered efivars operations
Mar 2 13:04:37.333830 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:04:37.333837 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:04:37.333848 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 13:04:37.333854 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 2 13:04:37.333861 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 2 13:04:37.333868 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 2 13:04:37.334109 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:04:37.334261 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:04:37.334407 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:04:37.335299 kernel: vgaarb: loaded
Mar 2 13:04:37.335358 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:04:37.335410 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:04:37.335417 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:04:37.335424 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:04:37.335432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:04:37.335439 kernel: pnp: PnP ACPI init
Mar 2 13:04:37.336006 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:04:37.336060 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:04:37.336075 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:04:37.336141 kernel: NET: Registered PF_INET protocol family
Mar 2 13:04:37.336182 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:04:37.336220 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:04:37.336235 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:04:37.336247 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:04:37.336259 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:04:37.336270 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:04:37.336282 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:04:37.336293 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:04:37.336310 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:04:37.336321 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:04:37.336669 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 2 13:04:37.337014 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 2 13:04:37.337219 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:04:37.337405 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:04:37.337628 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:04:37.337812 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 13:04:37.338087 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:04:37.338284 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 2 13:04:37.338307 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:04:37.338322 kernel: Initialise system trusted keyrings
Mar 2 13:04:37.338336 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:04:37.338358 kernel: Key type asymmetric registered
Mar 2 13:04:37.338372 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:04:37.338385 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 13:04:37.338405 kernel: io scheduler mq-deadline registered
Mar 2 13:04:37.338418 kernel: io scheduler kyber registered
Mar 2 13:04:37.338431 kernel: io scheduler bfq registered
Mar 2 13:04:37.338444 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:04:37.338458 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:04:37.338471 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:04:37.338484 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:04:37.338498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:04:37.338511 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:04:37.338526 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:04:37.338544 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:04:37.338558 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:04:37.338571 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 13:04:37.338920 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:04:37.339403 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:04:37.339731 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:04:36 UTC (1772456676)
Mar 2 13:04:37.340190 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 13:04:37.340242 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:04:37.340256 kernel: efifb: probing for efifb
Mar 2 13:04:37.340269 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 2 13:04:37.340280 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 2 13:04:37.340292 kernel: efifb: scrolling: redraw
Mar 2 13:04:37.340334 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 2 13:04:37.340347 kernel: Console: switching to colour frame buffer device 100x37
Mar 2 13:04:37.340359 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:04:37.340371 kernel: pstore: Using crash dump compression: deflate
Mar 2 13:04:37.340387 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 2 13:04:37.340399 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:04:37.340410 kernel: Segment Routing with IPv6
Mar 2 13:04:37.340422 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:04:37.340434 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:04:37.340445 kernel: Key type dns_resolver registered
Mar 2 13:04:37.340457 kernel: IPI shorthand broadcast: enabled
Mar 2 13:04:37.340497 kernel: sched_clock: Marking stable (3877026330, 606083632)->(5219792133, -736682171)
Mar 2 13:04:37.340514 kernel: registered taskstats version 1
Mar 2 13:04:37.340529 kernel: Loading compiled-in X.509 certificates
Mar 2 13:04:37.340541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 13:04:37.340554 kernel: Key type .fscrypt registered
Mar 2 13:04:37.340566 kernel: Key type fscrypt-provisioning registered
Mar 2 13:04:37.340578 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:04:37.340591 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:04:37.340603 kernel: ima: No architecture policies found
Mar 2 13:04:37.340615 kernel: clk: Disabling unused clocks
Mar 2 13:04:37.340627 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 13:04:37.340643 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 13:04:37.340656 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 13:04:37.340668 kernel: Run /init as init process
Mar 2 13:04:37.340680 kernel: with arguments:
Mar 2 13:04:37.340692 kernel: /init
Mar 2 13:04:37.340704 kernel: with environment:
Mar 2 13:04:37.340715 kernel: HOME=/
Mar 2 13:04:37.340727 kernel: TERM=linux
Mar 2 13:04:37.340781 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:04:37.340805 systemd[1]: Detected virtualization kvm.
Mar 2 13:04:37.340818 systemd[1]: Detected architecture x86-64.
Mar 2 13:04:37.340831 systemd[1]: Running in initrd.
Mar 2 13:04:37.340843 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:04:37.340855 systemd[1]: Hostname set to .
Mar 2 13:04:37.340868 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:04:37.340880 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:04:37.340898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:04:37.340911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:04:37.341005 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:04:37.341022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:04:37.341036 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:04:37.341060 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:04:37.341075 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:04:37.341088 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:04:37.341101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:04:37.341114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:04:37.341127 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:04:37.341140 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:04:37.341157 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:04:37.341170 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:04:37.341183 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:04:37.341196 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:04:37.341209 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:04:37.341222 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:04:37.341235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:04:37.341247 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:04:37.341264 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:04:37.341277 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:04:37.341290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:04:37.341303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:04:37.341316 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:04:37.341329 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:04:37.341341 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:04:37.341354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:04:37.341367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:04:37.341450 systemd-journald[195]: Collecting audit messages is disabled.
Mar 2 13:04:37.341486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:04:37.341499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:04:37.341512 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:04:37.341532 systemd-journald[195]: Journal started
Mar 2 13:04:37.341556 systemd-journald[195]: Runtime Journal (/run/log/journal/2ba7a0872d1546ef932fad8274c96a7d) is 6.0M, max 48.3M, 42.2M free.
Mar 2 13:04:37.346122 systemd-modules-load[196]: Inserted module 'overlay'
Mar 2 13:04:37.350637 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:04:37.364472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:04:38.977911 kernel: hrtimer: interrupt took 6263015 ns
Mar 2 13:04:38.978865 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:04:37.374183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:04:37.382240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:04:38.979825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:04:39.027662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:04:39.030149 kernel: Bridge firewalling registered Mar 2 13:04:39.028011 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 2 13:04:39.039416 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:04:39.044585 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:04:39.051213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:04:39.073538 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:04:39.083551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:04:39.093834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:04:39.120163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:04:39.140477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:04:39.146128 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:04:39.163303 dracut-cmdline[228]: dracut-dracut-053
Mar 2 13:04:39.168221 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:04:39.229705 systemd-resolved[232]: Positive Trust Anchors: Mar 2 13:04:39.229802 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:04:39.229860 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:04:39.237430 systemd-resolved[232]: Defaulting to hostname 'linux'. Mar 2 13:04:39.242770 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:04:39.259912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:04:39.325055 kernel: SCSI subsystem initialized Mar 2 13:04:39.339019 kernel: Loading iSCSI transport class v2.0-870. Mar 2 13:04:39.355033 kernel: iscsi: registered transport (tcp) Mar 2 13:04:39.381054 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:04:39.381168 kernel: QLogic iSCSI HBA Driver Mar 2 13:04:39.470036 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:04:39.485269 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 13:04:39.541357 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:04:39.541457 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:04:39.545590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 2 13:04:39.597097 kernel: raid6: avx2x4 gen() 22268 MB/s Mar 2 13:04:39.615324 kernel: raid6: avx2x2 gen() 19964 MB/s Mar 2 13:04:39.635117 kernel: raid6: avx2x1 gen() 14425 MB/s Mar 2 13:04:39.635232 kernel: raid6: using algorithm avx2x4 gen() 22268 MB/s Mar 2 13:04:39.655570 kernel: raid6: .... xor() 4010 MB/s, rmw enabled Mar 2 13:04:39.655668 kernel: raid6: using avx2x2 recovery algorithm Mar 2 13:04:39.683338 kernel: xor: automatically using best checksumming function avx Mar 2 13:04:39.889234 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:04:39.907878 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:04:39.928356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:04:39.947572 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 2 13:04:39.954040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:04:39.979322 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:04:40.000157 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Mar 2 13:04:40.046511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:04:40.059337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:04:40.168678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:04:40.180507 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:04:40.212532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 13:04:40.218413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 2 13:04:40.226145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:04:40.231576 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:04:40.242243 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 13:04:40.262049 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 13:04:40.265706 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:04:40.279077 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 13:04:40.285148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:04:40.305836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 13:04:40.306066 kernel: libata version 3.00 loaded. Mar 2 13:04:40.306133 kernel: GPT:9289727 != 19775487 Mar 2 13:04:40.306154 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 13:04:40.306193 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 13:04:40.306209 kernel: GPT:9289727 != 19775487 Mar 2 13:04:40.309673 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 13:04:40.309698 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:04:40.285363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:04:40.306152 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:04:40.309813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:04:40.310154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:04:40.318667 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:04:40.338185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:04:40.361971 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 2 13:04:40.370026 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461) Mar 2 13:04:40.372017 kernel: AES CTR mode by8 optimization enabled Mar 2 13:04:40.372505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:04:40.382646 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (459) Mar 2 13:04:40.382706 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 13:04:40.383038 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 13:04:40.372697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:04:40.393675 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 2 13:04:40.394053 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 13:04:40.402052 kernel: scsi host0: ahci Mar 2 13:04:40.403574 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 13:04:40.406620 kernel: scsi host1: ahci Mar 2 13:04:40.411117 kernel: scsi host2: ahci Mar 2 13:04:40.411489 kernel: scsi host3: ahci Mar 2 13:04:40.413300 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 13:04:40.425782 kernel: scsi host4: ahci Mar 2 13:04:40.414777 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 13:04:40.427098 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Mar 2 13:04:40.442316 kernel: scsi host5: ahci Mar 2 13:04:40.442622 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 2 13:04:40.442647 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 2 13:04:40.442666 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 2 13:04:40.442685 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 2 13:04:40.442702 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 2 13:04:40.435722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:04:40.456735 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 2 13:04:40.463231 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:04:40.466767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:04:40.472751 disk-uuid[555]: Primary Header is updated. Mar 2 13:04:40.472751 disk-uuid[555]: Secondary Entries is updated. Mar 2 13:04:40.472751 disk-uuid[555]: Secondary Header is updated. Mar 2 13:04:40.486081 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:04:40.508466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:04:40.523186 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:04:40.550734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 2 13:04:40.767642 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 13:04:40.768057 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 13:04:40.780594 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 13:04:40.786285 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 13:04:40.791398 kernel: ata3.00: applying bridge limits Mar 2 13:04:40.792868 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 13:04:40.821703 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 13:04:40.826047 kernel: ata3.00: configured for UDMA/100 Mar 2 13:04:40.829042 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 13:04:40.829070 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 13:04:40.884766 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 13:04:40.885370 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:04:40.902064 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 13:04:41.507037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:04:41.509564 disk-uuid[557]: The operation has completed successfully. Mar 2 13:04:41.571287 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:04:41.571527 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:04:41.615271 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:04:41.626549 sh[599]: Success Mar 2 13:04:41.647002 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 2 13:04:41.703434 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:04:41.719278 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:04:41.724385 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 2 13:04:41.746377 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 Mar 2 13:04:41.746418 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:04:41.746429 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 13:04:41.749031 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 13:04:41.751012 kernel: BTRFS info (device dm-0): using free space tree Mar 2 13:04:41.761033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:04:41.764021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:04:41.777143 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:04:41.780231 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:04:41.800780 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:04:41.800819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:04:41.800831 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:04:41.810017 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:04:41.823402 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 13:04:41.828203 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:04:41.835219 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:04:41.846183 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 2 13:04:41.989049 ignition[704]: Ignition 2.19.0 Mar 2 13:04:41.989079 ignition[704]: Stage: fetch-offline Mar 2 13:04:41.989198 ignition[704]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:04:41.989219 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:04:41.994784 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:04:41.989311 ignition[704]: parsed url from cmdline: "" Mar 2 13:04:41.989316 ignition[704]: no config URL provided Mar 2 13:04:41.989323 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:04:41.989333 ignition[704]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:04:41.989462 ignition[704]: op(1): [started] loading QEMU firmware config module Mar 2 13:04:41.989468 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 13:04:42.012184 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:04:42.022343 ignition[704]: op(1): [finished] loading QEMU firmware config module Mar 2 13:04:42.043178 systemd-networkd[789]: lo: Link UP Mar 2 13:04:42.043199 systemd-networkd[789]: lo: Gained carrier Mar 2 13:04:42.045715 systemd-networkd[789]: Enumeration completed Mar 2 13:04:42.046792 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:04:42.046797 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:04:42.048204 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:04:42.050215 systemd-networkd[789]: eth0: Link UP Mar 2 13:04:42.050220 systemd-networkd[789]: eth0: Gained carrier Mar 2 13:04:42.050228 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:04:42.051815 systemd[1]: Reached target network.target - Network. 
Mar 2 13:04:42.073053 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:04:42.204323 ignition[704]: parsing config with SHA512: e6dd02be8b5417ac150a603d2582a971850c7a7146e6f5d110d2438ed5a0a1a829cc776a47f1b53a5671a6a257aaa579114e058ad9f3b5ce19ff89b924966e62 Mar 2 13:04:42.210613 unknown[704]: fetched base config from "system" Mar 2 13:04:42.210648 unknown[704]: fetched user config from "qemu" Mar 2 13:04:42.211189 ignition[704]: fetch-offline: fetch-offline passed Mar 2 13:04:42.212216 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.87 Mar 2 13:04:42.211263 ignition[704]: Ignition finished successfully Mar 2 13:04:42.212265 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Mar 2 13:04:42.226157 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:04:42.231812 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 13:04:42.246128 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 13:04:42.279356 ignition[793]: Ignition 2.19.0 Mar 2 13:04:42.279381 ignition[793]: Stage: kargs Mar 2 13:04:42.279570 ignition[793]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:04:42.279584 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:04:42.291209 ignition[793]: kargs: kargs passed Mar 2 13:04:42.291288 ignition[793]: Ignition finished successfully Mar 2 13:04:42.297131 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 13:04:42.308242 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 13:04:42.344219 ignition[801]: Ignition 2.19.0 Mar 2 13:04:42.344243 ignition[801]: Stage: disks Mar 2 13:04:42.344519 ignition[801]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:04:42.344533 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:04:42.345413 ignition[801]: disks: disks passed Mar 2 13:04:42.345462 ignition[801]: Ignition finished successfully Mar 2 13:04:42.358574 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:04:42.360189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:04:42.368684 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:04:42.369825 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:04:42.379364 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:04:42.384033 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:04:42.399158 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 13:04:42.416188 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 2 13:04:42.421827 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:04:42.441147 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 13:04:42.545033 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none. Mar 2 13:04:42.545219 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 13:04:42.547874 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:04:42.564073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:04:42.567350 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 2 13:04:42.586722 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Mar 2 13:04:42.586820 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:04:42.586840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:04:42.587005 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:04:42.572137 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 2 13:04:42.572186 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:04:42.572211 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:04:42.608242 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:04:42.589129 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:04:42.593722 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 13:04:42.610105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:04:42.644878 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:04:42.653400 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:04:42.660900 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:04:42.668783 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:04:42.796635 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 13:04:42.808160 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:04:42.813584 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:04:42.817584 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 2 13:04:42.823826 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:04:43.116683 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 13:04:43.151662 ignition[931]: INFO : Ignition 2.19.0 Mar 2 13:04:43.151662 ignition[931]: INFO : Stage: mount Mar 2 13:04:43.156319 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:04:43.156319 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:04:43.156319 ignition[931]: INFO : mount: mount passed Mar 2 13:04:43.156319 ignition[931]: INFO : Ignition finished successfully Mar 2 13:04:43.168461 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:04:43.178214 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:04:43.189610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:04:43.209722 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Mar 2 13:04:43.209756 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:04:43.209769 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:04:43.211978 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:04:43.219005 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:04:43.220794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 13:04:43.274548 ignition[961]: INFO : Ignition 2.19.0 Mar 2 13:04:43.274548 ignition[961]: INFO : Stage: files Mar 2 13:04:43.279155 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:04:43.279155 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:04:43.279155 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:04:43.292434 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:04:43.292434 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:04:43.302276 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:04:43.305757 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:04:43.309363 unknown[961]: wrote ssh authorized keys file for user: core Mar 2 13:04:43.312373 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:04:43.316462 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 13:04:43.321505 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 13:04:43.383371 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 13:04:43.407348 systemd-networkd[789]: eth0: Gained IPv6LL Mar 2 13:04:43.491715 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 13:04:43.491715 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 2 13:04:43.502864 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 2 13:04:43.627402 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 2 13:04:43.726561 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 2 13:04:43.726561 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:04:43.735544 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 2 13:04:43.982677 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 2 13:04:44.363804 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:04:44.363804 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 2 13:04:44.373009 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 13:04:44.432975 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:04:44.440273 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:04:44.447823 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 13:04:44.447823 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 2 13:04:44.447823 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 13:04:44.447823 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:04:44.447823 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:04:44.447823 ignition[961]: INFO : files: files passed Mar 2 13:04:44.447823 ignition[961]: INFO : Ignition finished successfully Mar 2 13:04:44.444808 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:04:44.464356 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:04:44.471785 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 13:04:44.478914 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:04:44.501110 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 13:04:44.479130 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:04:44.510217 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:04:44.510217 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:04:44.497263 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:04:44.521710 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:04:44.501271 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:04:44.522320 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:04:44.556821 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:04:44.557162 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:04:44.563018 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:04:44.568518 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:04:44.573856 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:04:44.581206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:04:44.602264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:04:44.619157 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:04:44.633105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:04:44.636269 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:04:44.642094 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:04:44.647345 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:04:44.647506 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:04:44.653833 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:04:44.658073 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:04:44.663480 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:04:44.664626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:04:44.665454 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:04:44.665857 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:04:44.666686 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:04:44.667521 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:04:44.667918 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:04:44.668340 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:04:44.668723 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:04:44.668878 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:04:44.770349 ignition[1015]: INFO : Ignition 2.19.0
Mar 2 13:04:44.770349 ignition[1015]: INFO : Stage: umount
Mar 2 13:04:44.770349 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:04:44.770349 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:04:44.770349 ignition[1015]: INFO : umount: umount passed
Mar 2 13:04:44.770349 ignition[1015]: INFO : Ignition finished successfully
Mar 2 13:04:44.670060 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:04:44.670399 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:04:44.670796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:04:44.671236 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:04:44.671630 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:04:44.671791 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:04:44.672890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:04:44.673069 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:04:44.673393 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:04:44.673724 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:04:44.677064 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:04:44.677356 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:04:44.678066 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:04:44.678812 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:04:44.679075 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:04:44.679312 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:04:44.679414 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:04:44.679769 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:04:44.679893 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:04:44.680599 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:04:44.680748 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:04:44.733327 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:04:44.738670 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:04:44.742852 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:04:44.743116 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:04:44.747159 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:04:44.747408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:04:44.755627 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:04:44.755760 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:04:44.765551 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:04:44.765712 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:04:44.770523 systemd[1]: Stopped target network.target - Network.
Mar 2 13:04:44.774572 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:04:44.774637 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:04:44.779274 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:04:44.779351 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:04:44.785348 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:04:44.785425 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:04:44.792595 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:04:44.792696 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:04:44.799383 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:04:44.804125 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:04:44.808098 systemd-networkd[789]: eth0: DHCPv6 lease lost
Mar 2 13:04:44.811257 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:04:44.812190 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:04:44.812393 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:04:44.815853 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:04:44.816165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:04:44.824913 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:04:44.825100 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:04:44.839260 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:04:44.843105 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:04:44.843308 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:04:44.849504 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:04:44.849567 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:04:44.858102 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:04:44.858180 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:04:44.863078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:04:44.863146 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:04:44.869126 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:04:44.874053 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:04:44.874184 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:04:44.891299 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:04:44.891379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:04:44.896767 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:04:44.897042 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:04:44.902391 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:04:45.247885 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:04:44.902531 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:04:44.907853 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:04:44.908010 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:04:44.912086 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:04:44.912135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:04:44.917308 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:04:44.917372 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:04:44.922470 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:04:44.922557 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:04:44.927622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:04:44.927680 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:04:44.947248 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:04:44.951606 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:04:44.951734 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:04:44.958074 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 13:04:44.958171 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:04:44.976669 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:04:44.976808 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:04:45.019786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:04:45.020369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:04:45.027353 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:04:45.027584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:04:45.034783 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:04:45.128916 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:04:45.205467 systemd[1]: Switching root.
Mar 2 13:04:45.321754 systemd-journald[195]: Journal stopped
Mar 2 13:04:46.957265 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:04:46.957360 kernel: SELinux: policy capability open_perms=1
Mar 2 13:04:46.957385 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:04:46.957402 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:04:46.957425 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:04:46.957458 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:04:46.957475 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:04:46.957502 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:04:46.957531 kernel: audit: type=1403 audit(1772456685.446:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:04:46.957556 systemd[1]: Successfully loaded SELinux policy in 64.318ms.
Mar 2 13:04:46.957586 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 46.514ms.
Mar 2 13:04:46.957607 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:04:46.957623 systemd[1]: Detected virtualization kvm.
Mar 2 13:04:46.957651 systemd[1]: Detected architecture x86-64.
Mar 2 13:04:46.957672 systemd[1]: Detected first boot.
Mar 2 13:04:46.957690 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:04:46.957708 zram_generator::config[1060]: No configuration found.
Mar 2 13:04:46.957726 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:04:46.957745 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:04:46.957766 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:04:46.957784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:04:46.957808 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:04:46.957828 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:04:46.957848 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:04:46.957871 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:04:46.957892 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:04:46.957911 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:04:46.957980 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:04:46.958033 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:04:46.958059 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:04:46.958079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:04:46.958097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:04:46.958115 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:04:46.958133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:04:46.958160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:04:46.958179 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 13:04:46.958197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:04:46.958216 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:04:46.958238 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:04:46.958257 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:04:46.958275 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:04:46.958292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:04:46.958310 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:04:46.958331 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:04:46.958351 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:04:46.958370 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:04:46.958425 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:04:46.958446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:04:46.958464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:04:46.958483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:04:46.958502 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:04:46.958521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:04:46.958538 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:04:46.958556 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:04:46.958576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:04:46.958605 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:04:46.958628 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:04:46.958652 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:04:46.958675 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:04:46.958697 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:04:46.958719 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:04:46.958742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:04:46.958764 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:04:46.958793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:04:46.958818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:04:46.958841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:04:46.958864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:04:46.958886 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:04:46.958907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:04:46.958977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:04:46.959030 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:04:46.959051 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:04:46.959104 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:04:46.959124 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:04:46.959142 kernel: fuse: init (API version 7.39)
Mar 2 13:04:46.959160 kernel: loop: module loaded
Mar 2 13:04:46.959177 kernel: ACPI: bus type drm_connector registered
Mar 2 13:04:46.959195 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:04:46.959213 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:04:46.959231 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:04:46.959250 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:04:46.959274 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:04:46.959319 systemd-journald[1144]: Collecting audit messages is disabled.
Mar 2 13:04:46.959354 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:04:46.959374 systemd[1]: Stopped verity-setup.service.
Mar 2 13:04:46.959395 systemd-journald[1144]: Journal started
Mar 2 13:04:46.959424 systemd-journald[1144]: Runtime Journal (/run/log/journal/2ba7a0872d1546ef932fad8274c96a7d) is 6.0M, max 48.3M, 42.2M free.
Mar 2 13:04:46.283236 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:04:46.310324 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:04:46.311087 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:04:46.311524 systemd[1]: systemd-journald.service: Consumed 1.737s CPU time.
Mar 2 13:04:46.966055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:04:46.970273 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:04:46.973393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:04:46.976236 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:04:46.979225 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:04:46.981839 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:04:46.984798 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:04:46.990876 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:04:46.993649 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:04:46.997089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:04:47.000785 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:04:47.001083 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:04:47.004464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:04:47.004688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:04:47.008294 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:04:47.008510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:04:47.011783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:04:47.012051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:04:47.015622 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:04:47.015836 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:04:47.019100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:04:47.019305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:04:47.022393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:04:47.025603 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:04:47.029229 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:04:47.048872 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:04:47.059120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:04:47.064097 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:04:47.068665 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:04:47.068734 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:04:47.074439 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:04:47.081227 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:04:47.087867 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:04:47.094245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:04:47.096861 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:04:47.104478 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:04:47.108182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:04:47.110179 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:04:47.114383 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:04:47.119882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:04:47.124718 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:04:47.129727 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:04:47.136638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:04:47.209732 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:04:47.338513 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:04:47.343182 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:04:47.362242 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:04:47.374138 systemd-journald[1144]: Time spent on flushing to /var/log/journal/2ba7a0872d1546ef932fad8274c96a7d is 34.868ms for 997 entries.
Mar 2 13:04:47.374138 systemd-journald[1144]: System Journal (/var/log/journal/2ba7a0872d1546ef932fad8274c96a7d) is 8.0M, max 195.6M, 187.6M free.
Mar 2 13:04:47.450553 systemd-journald[1144]: Received client request to flush runtime journal.
Mar 2 13:04:47.450631 kernel: loop0: detected capacity change from 0 to 142488
Mar 2 13:04:47.450662 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:04:47.383485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:04:47.390207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:04:47.413228 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:04:47.416708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:04:47.439538 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 2 13:04:47.439562 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 2 13:04:47.449352 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:04:47.453753 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:04:47.466282 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:04:47.472532 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 2 13:04:47.482493 kernel: loop1: detected capacity change from 0 to 217752
Mar 2 13:04:47.481784 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:04:47.483482 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:04:47.606809 kernel: loop2: detected capacity change from 0 to 140768
Mar 2 13:04:47.782357 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:04:47.814160 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:04:47.963041 kernel: loop3: detected capacity change from 0 to 142488
Mar 2 13:04:47.963825 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Mar 2 13:04:47.964538 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Mar 2 13:04:47.981340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:04:48.015428 kernel: loop4: detected capacity change from 0 to 217752
Mar 2 13:04:48.036991 kernel: loop5: detected capacity change from 0 to 140768
Mar 2 13:04:48.063794 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 13:04:48.065034 (sd-merge)[1201]: Merged extensions into '/usr'.
Mar 2 13:04:48.072339 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:04:48.072387 systemd[1]: Reloading...
Mar 2 13:04:48.410052 zram_generator::config[1231]: No configuration found.
Mar 2 13:04:48.827357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:04:49.021629 systemd[1]: Reloading finished in 948 ms.
Mar 2 13:04:49.030744 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:04:49.091759 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:04:49.106144 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:04:49.124244 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:04:49.128046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:04:49.450473 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:04:49.450493 systemd[1]: Reloading...
Mar 2 13:04:49.474483 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:04:49.474905 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:04:49.476220 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:04:49.476500 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Mar 2 13:04:49.476594 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Mar 2 13:04:49.480889 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:04:49.480905 systemd-tmpfiles[1266]: Skipping /boot
Mar 2 13:04:49.499642 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:04:49.499662 systemd-tmpfiles[1266]: Skipping /boot
Mar 2 13:04:49.779993 zram_generator::config[1293]: No configuration found.
Mar 2 13:04:49.924339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:04:50.062369 systemd[1]: Reloading finished in 611 ms.
Mar 2 13:04:50.088803 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:04:50.113597 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:04:50.128208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:04:50.133301 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:04:50.138281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:04:50.148098 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:04:50.164336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:04:50.170729 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:04:50.176538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:04:50.176740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:04:50.192190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:04:50.213972 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
Mar 2 13:04:50.218403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:04:50.232268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:04:50.237475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:04:50.245504 augenrules[1356]: No rules
Mar 2 13:04:50.250073 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:04:50.253318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:04:50.257630 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:04:50.266301 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:04:50.273561 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:04:50.282530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:04:50.282911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:04:50.302919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:04:50.303466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:04:50.309750 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:04:50.314819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:04:50.315161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:04:50.332286 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:04:50.345366 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:04:50.353446 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:04:50.353622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:04:50.353834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:04:50.365202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:04:50.373215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:04:50.384254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:04:50.400507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1366)
Mar 2 13:04:50.404222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:04:50.409436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:04:50.425277 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:04:50.431246 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:04:50.435073 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:04:50.435196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:04:50.436630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:04:50.436865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:04:50.440716 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:04:50.441539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:04:50.446868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:04:50.447242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:04:50.571644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:04:50.572259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:04:50.617496 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:04:50.622592 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:04:50.639449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:04:50.641089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:04:50.644039 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 2 13:04:50.646113 systemd-resolved[1336]: Positive Trust Anchors:
Mar 2 13:04:50.646141 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:04:50.646169 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:04:50.651199 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:04:50.657758 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Mar 2 13:04:50.660029 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:04:50.663084 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:04:50.666082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:04:50.679130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:04:50.690189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:04:50.995206 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:04:51.023624 systemd-networkd[1399]: lo: Link UP
Mar 2 13:04:51.023647 systemd-networkd[1399]: lo: Gained carrier
Mar 2 13:04:51.031276 systemd-networkd[1399]: Enumeration completed
Mar 2 13:04:51.031478 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:04:51.034430 systemd[1]: Reached target network.target - Network.
Mar 2 13:04:51.041680 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:04:51.041703 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:04:51.043423 systemd-networkd[1399]: eth0: Link UP
Mar 2 13:04:51.043447 systemd-networkd[1399]: eth0: Gained carrier
Mar 2 13:04:51.043462 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:04:51.051175 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:04:51.059110 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:04:51.078990 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 2 13:04:51.085491 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:04:51.085756 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 13:04:51.086726 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:04:51.098810 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:04:51.107138 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 13:04:51.107204 systemd-timesyncd[1413]: Initial clock synchronization to Mon 2026-03-02 13:04:51.231943 UTC.
Mar 2 13:04:51.115099 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 2 13:04:51.120769 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:04:51.136176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:04:51.145981 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:04:51.307396 kernel: kvm_amd: TSC scaling supported
Mar 2 13:04:51.307539 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 13:04:51.307570 kernel: kvm_amd: Nested Paging enabled
Mar 2 13:04:51.309295 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 13:04:51.311678 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 13:04:51.327786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:04:51.363045 kernel: EDAC MC: Ver: 3.0.0
Mar 2 13:04:51.414284 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:04:51.640727 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:04:51.709813 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:04:51.757619 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:04:51.761200 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:04:51.764113 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:04:51.766775 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:04:51.769806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:04:51.773144 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:04:51.775991 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:04:51.778997 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:04:51.781906 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:04:51.782023 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:04:51.784141 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:04:51.795412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:04:51.801042 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:04:51.811346 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:04:51.815987 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:04:51.819619 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:04:51.822522 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:04:51.825335 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:04:51.827759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:04:51.827812 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:04:51.829406 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:04:51.830819 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:04:51.833843 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:04:51.839485 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:04:51.845319 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:04:51.849132 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:04:51.851127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:04:51.858097 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:04:51.862321 jq[1439]: false
Mar 2 13:04:51.864785 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:04:51.872262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:04:51.884167 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:04:51.887568 dbus-daemon[1438]: [system] SELinux support is enabled
Mar 2 13:04:51.901675 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:04:51.902359 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:04:51.905250 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:04:51.911345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found loop3
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found loop4
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found loop5
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found sr0
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda1
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda2
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda3
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found usr
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda4
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda6
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda7
Mar 2 13:04:51.917690 extend-filesystems[1440]: Found vda9
Mar 2 13:04:51.917690 extend-filesystems[1440]: Checking size of /dev/vda9
Mar 2 13:04:52.004223 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1366)
Mar 2 13:04:52.004266 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 13:04:51.915303 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:04:52.004467 extend-filesystems[1440]: Resized partition /dev/vda9
Mar 2 13:04:51.929067 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:04:52.007189 jq[1456]: true
Mar 2 13:04:52.007560 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Mar 2 13:04:51.947477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:04:52.012644 update_engine[1454]: I20260302 13:04:51.983633 1454 main.cc:92] Flatcar Update Engine starting
Mar 2 13:04:52.012644 update_engine[1454]: I20260302 13:04:52.011706 1454 update_check_scheduler.cc:74] Next update check in 10m51s
Mar 2 13:04:51.947701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:04:51.948219 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:04:51.950053 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:04:51.968871 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:04:51.969393 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:04:52.032430 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 2 13:04:52.032475 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:04:52.034105 jq[1464]: true
Mar 2 13:04:52.035321 systemd-logind[1448]: New seat seat0.
Mar 2 13:04:52.226696 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:04:52.231035 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 13:04:52.238621 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:04:52.241107 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 13:04:52.277798 tar[1463]: linux-amd64/LICENSE
Mar 2 13:04:52.277798 tar[1463]: linux-amd64/helm
Mar 2 13:04:52.289293 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:04:52.289293 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 13:04:52.289293 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 13:04:52.253035 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:04:52.320636 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Mar 2 13:04:52.264823 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:04:52.267221 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:04:52.271286 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:04:52.271479 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:04:52.284348 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:04:52.295885 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:04:52.296273 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:04:52.463174 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:04:52.467414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:04:52.473996 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:04:52.506705 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:04:52.866809 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:04:52.920094 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:04:52.940105 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:04:53.019619 systemd-networkd[1399]: eth0: Gained IPv6LL
Mar 2 13:04:53.028377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:04:53.032167 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:04:53.041259 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 13:04:53.046656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:04:53.055543 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:04:53.059905 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:04:53.060281 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:04:53.095865 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:04:53.125232 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 13:04:53.126496 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 13:04:53.130468 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:04:53.138648 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:04:53.251413 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:04:53.265565 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:04:53.274178 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 13:04:53.277362 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:04:53.656227 containerd[1469]: time="2026-03-02T13:04:53.655967204Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:04:53.692123 containerd[1469]: time="2026-03-02T13:04:53.692068766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.696539117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.696574794Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.696591916Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.696841278Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.696877514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.697008542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697142 containerd[1469]: time="2026-03-02T13:04:53.697029543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697511 containerd[1469]: time="2026-03-02T13:04:53.697489238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697577 containerd[1469]: time="2026-03-02T13:04:53.697563314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697631 containerd[1469]: time="2026-03-02T13:04:53.697617169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697674 containerd[1469]: time="2026-03-02T13:04:53.697662808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.697912 containerd[1469]: time="2026-03-02T13:04:53.697890244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.698399 containerd[1469]: time="2026-03-02T13:04:53.698378507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:04:53.698678 containerd[1469]: time="2026-03-02T13:04:53.698654720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:04:53.698743 containerd[1469]: time="2026-03-02T13:04:53.698729851Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:04:53.698932 containerd[1469]: time="2026-03-02T13:04:53.698913130Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:04:53.699156 containerd[1469]: time="2026-03-02T13:04:53.699136301Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:04:53.706570 containerd[1469]: time="2026-03-02T13:04:53.706545491Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:04:53.706719 containerd[1469]: time="2026-03-02T13:04:53.706701310Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:04:53.706824 containerd[1469]: time="2026-03-02T13:04:53.706810433Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:04:53.706878 containerd[1469]: time="2026-03-02T13:04:53.706865963Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:04:53.706929 containerd[1469]: time="2026-03-02T13:04:53.706917016Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:04:53.707228 containerd[1469]: time="2026-03-02T13:04:53.707207101Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:04:53.707560 containerd[1469]: time="2026-03-02T13:04:53.707540613Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:04:53.707770 containerd[1469]: time="2026-03-02T13:04:53.707747483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.707818420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.707835411Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.707848237Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.707862282Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.707906632Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.707923643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708023219Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708040748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708072372Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708085128Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708119850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708133773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708146539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708536 containerd[1469]: time="2026-03-02T13:04:53.708159112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708170638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708182622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708194027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708206508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708218695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708232110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708243587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708276440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708307567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708324882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708371456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708384141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.708785 containerd[1469]: time="2026-03-02T13:04:53.708397088Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:04:53.709114 containerd[1469]: time="2026-03-02T13:04:53.709094568Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:04:53.709408 containerd[1469]: time="2026-03-02T13:04:53.709389315Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:04:53.709463 containerd[1469]: time="2026-03-02T13:04:53.709449680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:04:53.709511 containerd[1469]: time="2026-03-02T13:04:53.709497717Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:04:53.709551 containerd[1469]: time="2026-03-02T13:04:53.709539578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.709642 containerd[1469]: time="2026-03-02T13:04:53.709626542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:04:53.709727 containerd[1469]: time="2026-03-02T13:04:53.709710782Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:04:53.709777 containerd[1469]: time="2026-03-02T13:04:53.709765043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:04:53.710511 containerd[1469]: time="2026-03-02T13:04:53.710413257Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 13:04:53.713406 containerd[1469]: time="2026-03-02T13:04:53.712378598Z" level=info msg="Connect containerd service" Mar 2 13:04:53.713406 containerd[1469]: time="2026-03-02T13:04:53.712604459Z" level=info msg="using legacy CRI server" Mar 2 13:04:53.713406 containerd[1469]: time="2026-03-02T13:04:53.712719788Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 13:04:53.713911 containerd[1469]: time="2026-03-02T13:04:53.713881207Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 13:04:54.091373 containerd[1469]: time="2026-03-02T13:04:54.090689895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:04:54.093015 containerd[1469]: time="2026-03-02T13:04:54.092631301Z" level=info msg="Start subscribing containerd event" Mar 2 13:04:54.095987 containerd[1469]: time="2026-03-02T13:04:54.094657141Z" level=info msg="Start recovering state" Mar 2 13:04:54.095987 containerd[1469]: time="2026-03-02T13:04:54.095044362Z" level=info msg="Start event monitor" Mar 2 13:04:54.095987 containerd[1469]: time="2026-03-02T13:04:54.095160255Z" level=info msg="Start snapshots 
syncer" Mar 2 13:04:54.095987 containerd[1469]: time="2026-03-02T13:04:54.095240765Z" level=info msg="Start cni network conf syncer for default" Mar 2 13:04:54.095987 containerd[1469]: time="2026-03-02T13:04:54.095344259Z" level=info msg="Start streaming server" Mar 2 13:04:54.097689 containerd[1469]: time="2026-03-02T13:04:54.097666428Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 13:04:54.097805 containerd[1469]: time="2026-03-02T13:04:54.097789073Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 13:04:54.102056 containerd[1469]: time="2026-03-02T13:04:54.101931654Z" level=info msg="containerd successfully booted in 0.448416s" Mar 2 13:04:54.103497 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 13:04:54.167787 tar[1463]: linux-amd64/README.md Mar 2 13:04:54.196072 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 13:04:56.759323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:04:56.810392 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:04:56.811107 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:04:56.861305 systemd[1]: Startup finished in 4.100s (kernel) + 8.547s (initrd) + 11.476s (userspace) = 24.124s. Mar 2 13:04:57.239884 kubelet[1550]: E0302 13:04:57.239727 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:04:57.243630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:04:57.244003 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 13:04:57.244526 systemd[1]: kubelet.service: Consumed 3.648s CPU time. Mar 2 13:05:01.655232 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:05:01.657066 systemd[1]: Started sshd@0-10.0.0.87:22-10.0.0.1:33608.service - OpenSSH per-connection server daemon (10.0.0.1:33608). Mar 2 13:05:01.714477 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 33608 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:01.716587 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:01.728497 systemd-logind[1448]: New session 1 of user core. Mar 2 13:05:01.729883 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:05:01.740281 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:05:01.755711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 13:05:01.769373 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 13:05:01.773628 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:05:01.887373 systemd[1567]: Queued start job for default target default.target. Mar 2 13:05:01.897630 systemd[1567]: Created slice app.slice - User Application Slice. Mar 2 13:05:01.897675 systemd[1567]: Reached target paths.target - Paths. Mar 2 13:05:01.897690 systemd[1567]: Reached target timers.target - Timers. Mar 2 13:05:01.899545 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:05:01.913184 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:05:01.913342 systemd[1567]: Reached target sockets.target - Sockets. Mar 2 13:05:01.913379 systemd[1567]: Reached target basic.target - Basic System. Mar 2 13:05:01.913423 systemd[1567]: Reached target default.target - Main User Target. Mar 2 13:05:01.913466 systemd[1567]: Startup finished in 131ms. 
Mar 2 13:05:01.913693 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:05:01.915538 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:05:01.987194 systemd[1]: Started sshd@1-10.0.0.87:22-10.0.0.1:33618.service - OpenSSH per-connection server daemon (10.0.0.1:33618). Mar 2 13:05:02.024898 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33618 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:02.026486 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:02.031609 systemd-logind[1448]: New session 2 of user core. Mar 2 13:05:02.041102 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 13:05:02.099498 sshd[1578]: pam_unix(sshd:session): session closed for user core Mar 2 13:05:02.115096 systemd[1]: sshd@1-10.0.0.87:22-10.0.0.1:33618.service: Deactivated successfully. Mar 2 13:05:02.116759 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 13:05:02.118411 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Mar 2 13:05:02.119886 systemd[1]: Started sshd@2-10.0.0.87:22-10.0.0.1:58770.service - OpenSSH per-connection server daemon (10.0.0.1:58770). Mar 2 13:05:02.121067 systemd-logind[1448]: Removed session 2. Mar 2 13:05:02.162158 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 58770 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:02.164213 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:02.170179 systemd-logind[1448]: New session 3 of user core. Mar 2 13:05:02.180206 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:05:02.234181 sshd[1585]: pam_unix(sshd:session): session closed for user core Mar 2 13:05:02.245900 systemd[1]: sshd@2-10.0.0.87:22-10.0.0.1:58770.service: Deactivated successfully. Mar 2 13:05:02.248441 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 2 13:05:02.250411 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Mar 2 13:05:02.268334 systemd[1]: Started sshd@3-10.0.0.87:22-10.0.0.1:58778.service - OpenSSH per-connection server daemon (10.0.0.1:58778). Mar 2 13:05:02.269577 systemd-logind[1448]: Removed session 3. Mar 2 13:05:02.301879 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 58778 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:02.303481 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:02.309197 systemd-logind[1448]: New session 4 of user core. Mar 2 13:05:02.326174 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:05:02.383335 sshd[1592]: pam_unix(sshd:session): session closed for user core Mar 2 13:05:02.394702 systemd[1]: sshd@3-10.0.0.87:22-10.0.0.1:58778.service: Deactivated successfully. Mar 2 13:05:02.396478 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:05:02.398074 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:05:02.405267 systemd[1]: Started sshd@4-10.0.0.87:22-10.0.0.1:58794.service - OpenSSH per-connection server daemon (10.0.0.1:58794). Mar 2 13:05:02.406329 systemd-logind[1448]: Removed session 4. Mar 2 13:05:02.438487 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 58794 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:02.440008 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:02.444842 systemd-logind[1448]: New session 5 of user core. Mar 2 13:05:02.457167 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 2 13:05:02.519076 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 13:05:02.519498 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:05:02.537461 sudo[1602]: pam_unix(sudo:session): session closed for user root Mar 2 13:05:02.539705 sshd[1599]: pam_unix(sshd:session): session closed for user core Mar 2 13:05:02.546881 systemd[1]: sshd@4-10.0.0.87:22-10.0.0.1:58794.service: Deactivated successfully. Mar 2 13:05:02.548872 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 13:05:02.550563 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Mar 2 13:05:02.552223 systemd[1]: Started sshd@5-10.0.0.87:22-10.0.0.1:58806.service - OpenSSH per-connection server daemon (10.0.0.1:58806). Mar 2 13:05:02.553526 systemd-logind[1448]: Removed session 5. Mar 2 13:05:02.614296 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 58806 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:02.616014 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:02.620998 systemd-logind[1448]: New session 6 of user core. Mar 2 13:05:02.628095 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:05:02.685431 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 13:05:02.685863 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:05:02.690743 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 2 13:05:02.698570 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 13:05:02.699010 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:05:02.725229 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Mar 2 13:05:02.727657 auditctl[1614]: No rules Mar 2 13:05:02.728826 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 13:05:02.729142 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 13:05:02.731487 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:05:02.769363 augenrules[1632]: No rules Mar 2 13:05:02.771200 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:05:02.772609 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 2 13:05:02.774901 sshd[1607]: pam_unix(sshd:session): session closed for user core Mar 2 13:05:02.787038 systemd[1]: sshd@5-10.0.0.87:22-10.0.0.1:58806.service: Deactivated successfully. Mar 2 13:05:02.788870 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:05:02.790647 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Mar 2 13:05:02.800261 systemd[1]: Started sshd@6-10.0.0.87:22-10.0.0.1:58812.service - OpenSSH per-connection server daemon (10.0.0.1:58812). Mar 2 13:05:02.801446 systemd-logind[1448]: Removed session 6. Mar 2 13:05:02.834393 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 58812 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:05:02.836034 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:05:02.840979 systemd-logind[1448]: New session 7 of user core. Mar 2 13:05:02.851104 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 13:05:02.907848 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 13:05:02.908377 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:05:03.189288 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 2 13:05:03.189473 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 13:05:03.472542 dockerd[1661]: time="2026-03-02T13:05:03.472299594Z" level=info msg="Starting up" Mar 2 13:05:03.580016 dockerd[1661]: time="2026-03-02T13:05:03.579838025Z" level=info msg="Loading containers: start." Mar 2 13:05:03.727985 kernel: Initializing XFRM netlink socket Mar 2 13:05:03.922232 systemd-networkd[1399]: docker0: Link UP Mar 2 13:05:03.964300 dockerd[1661]: time="2026-03-02T13:05:03.964058128Z" level=info msg="Loading containers: done." Mar 2 13:05:04.009521 dockerd[1661]: time="2026-03-02T13:05:04.008587091Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 13:05:04.009521 dockerd[1661]: time="2026-03-02T13:05:04.008793020Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 13:05:04.009521 dockerd[1661]: time="2026-03-02T13:05:04.009249462Z" level=info msg="Daemon has completed initialization" Mar 2 13:05:04.666723 dockerd[1661]: time="2026-03-02T13:05:04.666039872Z" level=info msg="API listen on /run/docker.sock" Mar 2 13:05:04.667604 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 13:05:07.521681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 13:05:07.555016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:09.173264 containerd[1469]: time="2026-03-02T13:05:09.172561955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 2 13:05:11.882735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463529470.mount: Deactivated successfully. 
Mar 2 13:05:12.252846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:05:12.302040 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:05:12.759888 kubelet[1828]: E0302 13:05:12.759728 1828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:05:12.767831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:05:12.768168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:05:12.769032 systemd[1]: kubelet.service: Consumed 4.947s CPU time. Mar 2 13:05:14.151045 containerd[1469]: time="2026-03-02T13:05:14.150772945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:14.152060 containerd[1469]: time="2026-03-02T13:05:14.151367084Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 2 13:05:14.152704 containerd[1469]: time="2026-03-02T13:05:14.152640966Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:14.155999 containerd[1469]: time="2026-03-02T13:05:14.155961066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:14.157159 containerd[1469]: time="2026-03-02T13:05:14.157112878Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 4.984201624s" Mar 2 13:05:14.157243 containerd[1469]: time="2026-03-02T13:05:14.157215083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 2 13:05:14.158274 containerd[1469]: time="2026-03-02T13:05:14.158221593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 2 13:05:15.667833 containerd[1469]: time="2026-03-02T13:05:15.667612547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:15.669088 containerd[1469]: time="2026-03-02T13:05:15.668537621Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 2 13:05:15.669776 containerd[1469]: time="2026-03-02T13:05:15.669703111Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:15.673743 containerd[1469]: time="2026-03-02T13:05:15.673686547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:15.675006 containerd[1469]: time="2026-03-02T13:05:15.674968547Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.516706306s" Mar 2 13:05:15.675100 containerd[1469]: time="2026-03-02T13:05:15.675009142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 2 13:05:15.675767 containerd[1469]: time="2026-03-02T13:05:15.675711842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 2 13:05:16.631891 containerd[1469]: time="2026-03-02T13:05:16.631816312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:16.633036 containerd[1469]: time="2026-03-02T13:05:16.632459300Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 2 13:05:16.633874 containerd[1469]: time="2026-03-02T13:05:16.633834336Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:16.637137 containerd[1469]: time="2026-03-02T13:05:16.637040039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:16.638239 containerd[1469]: time="2026-03-02T13:05:16.638203497Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size 
\"17240058\" in 962.440313ms" Mar 2 13:05:16.638316 containerd[1469]: time="2026-03-02T13:05:16.638245482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 2 13:05:16.638841 containerd[1469]: time="2026-03-02T13:05:16.638805602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 2 13:05:17.552307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912605144.mount: Deactivated successfully. Mar 2 13:05:17.884672 containerd[1469]: time="2026-03-02T13:05:17.884461383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:17.885451 containerd[1469]: time="2026-03-02T13:05:17.885403453Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 2 13:05:17.886788 containerd[1469]: time="2026-03-02T13:05:17.886747045Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:17.889869 containerd[1469]: time="2026-03-02T13:05:17.889767365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:17.890887 containerd[1469]: time="2026-03-02T13:05:17.890691011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.251858501s" Mar 2 13:05:17.890887 containerd[1469]: 
time="2026-03-02T13:05:17.890759487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 2 13:05:17.891381 containerd[1469]: time="2026-03-02T13:05:17.891356241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 2 13:05:18.342220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000649156.mount: Deactivated successfully. Mar 2 13:05:19.490453 containerd[1469]: time="2026-03-02T13:05:19.490342611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:19.492048 containerd[1469]: time="2026-03-02T13:05:19.491477001Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 2 13:05:19.492715 containerd[1469]: time="2026-03-02T13:05:19.492657530Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:19.496429 containerd[1469]: time="2026-03-02T13:05:19.496364613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:19.498079 containerd[1469]: time="2026-03-02T13:05:19.498044261Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.606656745s" Mar 2 13:05:19.498079 containerd[1469]: time="2026-03-02T13:05:19.498081006Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 2 13:05:19.499122 containerd[1469]: time="2026-03-02T13:05:19.498563213Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 13:05:19.868832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916240752.mount: Deactivated successfully. Mar 2 13:05:19.876522 containerd[1469]: time="2026-03-02T13:05:19.876383970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:19.877489 containerd[1469]: time="2026-03-02T13:05:19.877415790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 13:05:19.878758 containerd[1469]: time="2026-03-02T13:05:19.878676040Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:19.882093 containerd[1469]: time="2026-03-02T13:05:19.882018284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:19.882750 containerd[1469]: time="2026-03-02T13:05:19.882658183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 384.056181ms" Mar 2 13:05:19.882750 containerd[1469]: time="2026-03-02T13:05:19.882705240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 
13:05:19.883342 containerd[1469]: time="2026-03-02T13:05:19.883290601Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 2 13:05:20.335063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300242830.mount: Deactivated successfully. Mar 2 13:05:22.234277 containerd[1469]: time="2026-03-02T13:05:22.233779867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:22.236375 containerd[1469]: time="2026-03-02T13:05:22.234582098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 2 13:05:22.237529 containerd[1469]: time="2026-03-02T13:05:22.237423576Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:22.245591 containerd[1469]: time="2026-03-02T13:05:22.245420358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:05:22.248758 containerd[1469]: time="2026-03-02T13:05:22.248586296Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.365248198s" Mar 2 13:05:22.248854 containerd[1469]: time="2026-03-02T13:05:22.248784894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 2 13:05:23.188370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 2 13:05:23.198195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:23.648811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:05:23.680654 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:05:23.788722 kubelet[2053]: E0302 13:05:23.788482 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:05:23.795903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:05:23.796335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:05:25.098982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:05:25.109335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:25.157762 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-7.scope)... Mar 2 13:05:25.157911 systemd[1]: Reloading... Mar 2 13:05:25.305011 zram_generator::config[2108]: No configuration found. Mar 2 13:05:25.517516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:05:25.667119 systemd[1]: Reloading finished in 508 ms. Mar 2 13:05:25.752261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:25.759556 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:05:25.760139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 13:05:25.775508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:25.983745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:05:25.990565 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:05:26.102582 kubelet[2159]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:05:26.358286 kubelet[2159]: I0302 13:05:26.358031 2159 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 13:05:26.358286 kubelet[2159]: I0302 13:05:26.358090 2159 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:05:26.358286 kubelet[2159]: I0302 13:05:26.358142 2159 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:05:26.358286 kubelet[2159]: I0302 13:05:26.358192 2159 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 13:05:26.358576 kubelet[2159]: I0302 13:05:26.358485 2159 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 13:05:26.455135 kubelet[2159]: E0302 13:05:26.454827 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:05:26.455669 kubelet[2159]: I0302 13:05:26.455549 2159 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:05:26.519404 kubelet[2159]: E0302 13:05:26.518550 2159 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:05:26.520098 kubelet[2159]: I0302 13:05:26.519622 2159 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:05:26.552611 kubelet[2159]: I0302 13:05:26.552336 2159 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:05:26.554144 kubelet[2159]: I0302 13:05:26.554007 2159 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:05:26.554752 kubelet[2159]: I0302 13:05:26.554114 2159 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:05:26.554752 kubelet[2159]: I0302 13:05:26.554727 2159 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 13:05:26.554752 
kubelet[2159]: I0302 13:05:26.554738 2159 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 13:05:26.555586 kubelet[2159]: I0302 13:05:26.555023 2159 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:05:26.557872 kubelet[2159]: I0302 13:05:26.557791 2159 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 13:05:26.558517 kubelet[2159]: I0302 13:05:26.558455 2159 kubelet.go:482] "Attempting to sync node with API server" Mar 2 13:05:26.558586 kubelet[2159]: I0302 13:05:26.558522 2159 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:05:26.558656 kubelet[2159]: I0302 13:05:26.558624 2159 kubelet.go:394] "Adding apiserver pod source" Mar 2 13:05:26.558693 kubelet[2159]: I0302 13:05:26.558663 2159 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:05:26.563701 kubelet[2159]: I0302 13:05:26.563638 2159 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:05:26.567336 kubelet[2159]: I0302 13:05:26.567243 2159 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:05:26.567336 kubelet[2159]: I0302 13:05:26.567304 2159 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:05:26.567568 kubelet[2159]: W0302 13:05:26.567487 2159 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 2 13:05:26.579993 kubelet[2159]: I0302 13:05:26.578770 2159 server.go:1257] "Started kubelet" Mar 2 13:05:26.579993 kubelet[2159]: I0302 13:05:26.579817 2159 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:05:26.581967 kubelet[2159]: I0302 13:05:26.581706 2159 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:05:26.582215 kubelet[2159]: I0302 13:05:26.582130 2159 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:05:26.582750 kubelet[2159]: I0302 13:05:26.582693 2159 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:05:26.584512 kubelet[2159]: I0302 13:05:26.584466 2159 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:05:26.587815 kubelet[2159]: E0302 13:05:26.585843 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189908007f77ace9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:05:26.578572521 +0000 UTC m=+0.580101099,LastTimestamp:2026-03-02 13:05:26.578572521 +0000 UTC m=+0.580101099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:05:26.589645 kubelet[2159]: E0302 13:05:26.589582 2159 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:05:26.590558 kubelet[2159]: I0302 13:05:26.590539 2159 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 13:05:26.591469 kubelet[2159]: I0302 13:05:26.591418 2159 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:05:26.595481 kubelet[2159]: E0302 13:05:26.595433 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:26.595573 kubelet[2159]: I0302 13:05:26.595499 2159 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 13:05:26.595812 kubelet[2159]: I0302 13:05:26.595769 2159 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:05:26.596017 kubelet[2159]: I0302 13:05:26.595975 2159 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:05:26.597400 kubelet[2159]: I0302 13:05:26.597320 2159 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:05:26.597675 kubelet[2159]: E0302 13:05:26.597609 2159 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="200ms" Mar 2 13:05:26.601063 kubelet[2159]: I0302 13:05:26.600999 2159 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:05:26.601063 kubelet[2159]: I0302 13:05:26.601041 2159 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:05:26.624024 kubelet[2159]: I0302 13:05:26.623841 2159 cpu_manager.go:225] "Starting" policy="none" Mar 2 13:05:26.624024 kubelet[2159]: I0302 
13:05:26.623875 2159 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 13:05:26.624024 kubelet[2159]: I0302 13:05:26.623973 2159 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:05:26.628557 kubelet[2159]: I0302 13:05:26.628509 2159 policy_none.go:50] "Start" Mar 2 13:05:26.628623 kubelet[2159]: I0302 13:05:26.628571 2159 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:05:26.628623 kubelet[2159]: I0302 13:05:26.628611 2159 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:05:26.631220 kubelet[2159]: I0302 13:05:26.631162 2159 policy_none.go:44] "Start" Mar 2 13:05:26.638691 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:05:26.644140 kubelet[2159]: I0302 13:05:26.644036 2159 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:05:26.647193 kubelet[2159]: I0302 13:05:26.647131 2159 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 13:05:26.647629 kubelet[2159]: I0302 13:05:26.647531 2159 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 13:05:26.647629 kubelet[2159]: I0302 13:05:26.647580 2159 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 13:05:26.647719 kubelet[2159]: E0302 13:05:26.647636 2159 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:05:26.661567 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:05:26.687798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 2 13:05:26.690329 kubelet[2159]: E0302 13:05:26.689828 2159 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:05:26.690329 kubelet[2159]: I0302 13:05:26.690193 2159 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:05:26.690654 kubelet[2159]: I0302 13:05:26.690207 2159 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:05:26.690899 kubelet[2159]: I0302 13:05:26.690834 2159 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:05:26.693423 kubelet[2159]: E0302 13:05:26.693390 2159 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:05:26.693477 kubelet[2159]: E0302 13:05:26.693436 2159 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:05:26.766020 systemd[1]: Created slice kubepods-burstable-podd5c55cacb33aeff715d9c6ca534df2fc.slice - libcontainer container kubepods-burstable-podd5c55cacb33aeff715d9c6ca534df2fc.slice. Mar 2 13:05:26.775470 kubelet[2159]: E0302 13:05:26.775392 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:26.780021 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. 
Mar 2 13:05:26.788032 kubelet[2159]: E0302 13:05:26.787891 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:26.792661 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 2 13:05:26.793613 kubelet[2159]: I0302 13:05:26.793500 2159 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:05:26.794285 kubelet[2159]: E0302 13:05:26.794214 2159 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 2 13:05:26.795731 kubelet[2159]: E0302 13:05:26.795668 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:26.796776 kubelet[2159]: I0302 13:05:26.796734 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5c55cacb33aeff715d9c6ca534df2fc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5c55cacb33aeff715d9c6ca534df2fc\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:26.796911 kubelet[2159]: I0302 13:05:26.796808 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:26.796911 kubelet[2159]: I0302 13:05:26.796837 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:26.796911 kubelet[2159]: I0302 13:05:26.796866 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:26.796911 kubelet[2159]: I0302 13:05:26.796891 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5c55cacb33aeff715d9c6ca534df2fc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5c55cacb33aeff715d9c6ca534df2fc\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:26.797170 kubelet[2159]: I0302 13:05:26.796914 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5c55cacb33aeff715d9c6ca534df2fc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5c55cacb33aeff715d9c6ca534df2fc\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:26.797170 kubelet[2159]: I0302 13:05:26.797058 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:26.797170 kubelet[2159]: I0302 13:05:26.797092 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:26.797170 kubelet[2159]: I0302 13:05:26.797119 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:26.798306 kubelet[2159]: E0302 13:05:26.798212 2159 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="400ms" Mar 2 13:05:27.029312 kubelet[2159]: I0302 13:05:27.027856 2159 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:05:27.029312 kubelet[2159]: E0302 13:05:27.028618 2159 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 2 13:05:27.092874 kubelet[2159]: E0302 13:05:27.092330 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:27.098327 containerd[1469]: time="2026-03-02T13:05:27.097815472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5c55cacb33aeff715d9c6ca534df2fc,Namespace:kube-system,Attempt:0,}" Mar 2 13:05:27.099709 containerd[1469]: time="2026-03-02T13:05:27.099356208Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 2 13:05:27.099776 kubelet[2159]: E0302 13:05:27.098221 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:27.101747 kubelet[2159]: E0302 13:05:27.101632 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:27.108845 containerd[1469]: time="2026-03-02T13:05:27.108658880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 2 13:05:27.289635 kubelet[2159]: E0302 13:05:27.287991 2159 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="800ms" Mar 2 13:05:27.433715 kubelet[2159]: I0302 13:05:27.433477 2159 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:05:27.434774 kubelet[2159]: E0302 13:05:27.434553 2159 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 2 13:05:27.845912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062076136.mount: Deactivated successfully. 
Mar 2 13:05:27.852909 containerd[1469]: time="2026-03-02T13:05:27.852774433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:05:27.856484 containerd[1469]: time="2026-03-02T13:05:27.856346912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 13:05:27.857806 containerd[1469]: time="2026-03-02T13:05:27.857709867Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:05:27.859323 containerd[1469]: time="2026-03-02T13:05:27.859171431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:05:27.860262 containerd[1469]: time="2026-03-02T13:05:27.860181524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:05:27.861667 containerd[1469]: time="2026-03-02T13:05:27.861624517Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:05:27.862537 containerd[1469]: time="2026-03-02T13:05:27.862456450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:05:27.868156 containerd[1469]: time="2026-03-02T13:05:27.868036607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:05:27.869051 
containerd[1469]: time="2026-03-02T13:05:27.869006233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 769.52342ms" Mar 2 13:05:27.869769 containerd[1469]: time="2026-03-02T13:05:27.869703167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 771.473901ms" Mar 2 13:05:27.871742 containerd[1469]: time="2026-03-02T13:05:27.871670026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 762.788286ms" Mar 2 13:05:28.337511 kubelet[2159]: E0302 13:05:28.335567 2159 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="1.6s" Mar 2 13:05:28.368407 kubelet[2159]: I0302 13:05:28.368069 2159 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:05:28.378848 kubelet[2159]: E0302 13:05:28.378361 2159 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 2 13:05:28.666204 kubelet[2159]: E0302 13:05:28.666103 2159 
certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:05:28.684037 containerd[1469]: time="2026-03-02T13:05:28.683558886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:05:28.684712 containerd[1469]: time="2026-03-02T13:05:28.683985733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:05:28.684712 containerd[1469]: time="2026-03-02T13:05:28.684082947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:28.684712 containerd[1469]: time="2026-03-02T13:05:28.684327208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:28.701164 containerd[1469]: time="2026-03-02T13:05:28.700818527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:05:28.701164 containerd[1469]: time="2026-03-02T13:05:28.701030042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:05:28.701164 containerd[1469]: time="2026-03-02T13:05:28.701056976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:28.703325 containerd[1469]: time="2026-03-02T13:05:28.701179853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:28.712784 containerd[1469]: time="2026-03-02T13:05:28.709745811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:05:28.712784 containerd[1469]: time="2026-03-02T13:05:28.710308831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:05:28.712784 containerd[1469]: time="2026-03-02T13:05:28.710334432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:28.732309 containerd[1469]: time="2026-03-02T13:05:28.732016552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:28.760626 systemd[1]: Started cri-containerd-688254b590b3e3ff97bd09f7dc62434b70b7696bcd78e841d2b3472ca58b0356.scope - libcontainer container 688254b590b3e3ff97bd09f7dc62434b70b7696bcd78e841d2b3472ca58b0356. Mar 2 13:05:29.133376 systemd[1]: Started cri-containerd-df8097060de10677085e880dff83be0f4e756ad96233a8d35700b60baef86104.scope - libcontainer container df8097060de10677085e880dff83be0f4e756ad96233a8d35700b60baef86104. Mar 2 13:05:29.177444 systemd[1]: Started cri-containerd-0d461643a4b501020ebc095cb1867307cc5ecb4f0c38c14ff94f5777a2731c91.scope - libcontainer container 0d461643a4b501020ebc095cb1867307cc5ecb4f0c38c14ff94f5777a2731c91. 
Mar 2 13:05:29.488016 containerd[1469]: time="2026-03-02T13:05:29.487915195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5c55cacb33aeff715d9c6ca534df2fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"688254b590b3e3ff97bd09f7dc62434b70b7696bcd78e841d2b3472ca58b0356\"" Mar 2 13:05:29.490542 kubelet[2159]: E0302 13:05:29.490477 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:29.511093 containerd[1469]: time="2026-03-02T13:05:29.510488696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d461643a4b501020ebc095cb1867307cc5ecb4f0c38c14ff94f5777a2731c91\"" Mar 2 13:05:29.512465 kubelet[2159]: E0302 13:05:29.512432 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:29.744127 containerd[1469]: time="2026-03-02T13:05:29.742836828Z" level=info msg="CreateContainer within sandbox \"688254b590b3e3ff97bd09f7dc62434b70b7696bcd78e841d2b3472ca58b0356\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:05:29.748848 containerd[1469]: time="2026-03-02T13:05:29.747971467Z" level=info msg="CreateContainer within sandbox \"0d461643a4b501020ebc095cb1867307cc5ecb4f0c38c14ff94f5777a2731c91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:05:29.761889 containerd[1469]: time="2026-03-02T13:05:29.761505042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"df8097060de10677085e880dff83be0f4e756ad96233a8d35700b60baef86104\"" Mar 2 13:05:29.764578 
kubelet[2159]: E0302 13:05:29.764540 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:29.774067 containerd[1469]: time="2026-03-02T13:05:29.774028852Z" level=info msg="CreateContainer within sandbox \"df8097060de10677085e880dff83be0f4e756ad96233a8d35700b60baef86104\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:05:29.775114 containerd[1469]: time="2026-03-02T13:05:29.775034202Z" level=info msg="CreateContainer within sandbox \"688254b590b3e3ff97bd09f7dc62434b70b7696bcd78e841d2b3472ca58b0356\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e14db598fd4630bfeab32cd380837d70de791f49e5e17d7b6f96f47b87556ba7\"" Mar 2 13:05:29.776768 containerd[1469]: time="2026-03-02T13:05:29.776679192Z" level=info msg="StartContainer for \"e14db598fd4630bfeab32cd380837d70de791f49e5e17d7b6f96f47b87556ba7\"" Mar 2 13:05:29.779527 kubelet[2159]: E0302 13:05:29.779324 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189908007f77ace9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:05:26.578572521 +0000 UTC m=+0.580101099,LastTimestamp:2026-03-02 13:05:26.578572521 +0000 UTC m=+0.580101099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:05:29.799770 containerd[1469]: time="2026-03-02T13:05:29.799562687Z" level=info msg="CreateContainer within sandbox 
\"0d461643a4b501020ebc095cb1867307cc5ecb4f0c38c14ff94f5777a2731c91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"312da512c054cc0ac7236721e4fb8d5cd72d5a46c4091c7fd1042d6b2e5d1906\"" Mar 2 13:05:29.801017 containerd[1469]: time="2026-03-02T13:05:29.800759570Z" level=info msg="StartContainer for \"312da512c054cc0ac7236721e4fb8d5cd72d5a46c4091c7fd1042d6b2e5d1906\"" Mar 2 13:05:29.808424 containerd[1469]: time="2026-03-02T13:05:29.808317025Z" level=info msg="CreateContainer within sandbox \"df8097060de10677085e880dff83be0f4e756ad96233a8d35700b60baef86104\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"802efe95048e05b3f33291f04134567627ec72d9921fa208ea71ba324f222a39\"" Mar 2 13:05:29.811255 containerd[1469]: time="2026-03-02T13:05:29.809842364Z" level=info msg="StartContainer for \"802efe95048e05b3f33291f04134567627ec72d9921fa208ea71ba324f222a39\"" Mar 2 13:05:29.861209 systemd[1]: Started cri-containerd-e14db598fd4630bfeab32cd380837d70de791f49e5e17d7b6f96f47b87556ba7.scope - libcontainer container e14db598fd4630bfeab32cd380837d70de791f49e5e17d7b6f96f47b87556ba7. Mar 2 13:05:29.873222 systemd[1]: Started cri-containerd-312da512c054cc0ac7236721e4fb8d5cd72d5a46c4091c7fd1042d6b2e5d1906.scope - libcontainer container 312da512c054cc0ac7236721e4fb8d5cd72d5a46c4091c7fd1042d6b2e5d1906. Mar 2 13:05:29.947430 kubelet[2159]: E0302 13:05:29.947237 2159 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="3.2s" Mar 2 13:05:30.030120 systemd[1]: Started cri-containerd-802efe95048e05b3f33291f04134567627ec72d9921fa208ea71ba324f222a39.scope - libcontainer container 802efe95048e05b3f33291f04134567627ec72d9921fa208ea71ba324f222a39. 
Mar 2 13:05:30.037391 kubelet[2159]: I0302 13:05:30.037308 2159 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:05:30.037907 kubelet[2159]: E0302 13:05:30.037770 2159 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 2 13:05:30.106584 containerd[1469]: time="2026-03-02T13:05:30.106496928Z" level=info msg="StartContainer for \"e14db598fd4630bfeab32cd380837d70de791f49e5e17d7b6f96f47b87556ba7\" returns successfully" Mar 2 13:05:30.156276 containerd[1469]: time="2026-03-02T13:05:30.141968803Z" level=info msg="StartContainer for \"312da512c054cc0ac7236721e4fb8d5cd72d5a46c4091c7fd1042d6b2e5d1906\" returns successfully" Mar 2 13:05:30.336867 containerd[1469]: time="2026-03-02T13:05:30.336412250Z" level=info msg="StartContainer for \"802efe95048e05b3f33291f04134567627ec72d9921fa208ea71ba324f222a39\" returns successfully" Mar 2 13:05:30.861410 kubelet[2159]: E0302 13:05:30.861219 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:30.861410 kubelet[2159]: E0302 13:05:30.861432 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:30.865384 kubelet[2159]: E0302 13:05:30.865177 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:30.865384 kubelet[2159]: E0302 13:05:30.865292 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:30.868462 kubelet[2159]: E0302 13:05:30.868224 2159 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:30.868462 kubelet[2159]: E0302 13:05:30.868387 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:31.900365 kubelet[2159]: E0302 13:05:31.900028 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:31.900365 kubelet[2159]: E0302 13:05:31.900280 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:31.903250 kubelet[2159]: E0302 13:05:31.901823 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:31.903250 kubelet[2159]: E0302 13:05:31.902134 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:31.903250 kubelet[2159]: E0302 13:05:31.902733 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:31.903250 kubelet[2159]: E0302 13:05:31.902883 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:32.912163 kubelet[2159]: E0302 13:05:32.911665 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:32.912163 kubelet[2159]: E0302 13:05:32.911873 2159 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:32.912163 kubelet[2159]: E0302 13:05:32.912225 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:32.913481 kubelet[2159]: E0302 13:05:32.912336 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:33.244411 kubelet[2159]: I0302 13:05:33.243514 2159 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:05:34.483234 kubelet[2159]: E0302 13:05:34.482860 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:34.483234 kubelet[2159]: E0302 13:05:34.483281 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:35.002306 kubelet[2159]: E0302 13:05:35.002193 2159 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 13:05:35.128410 kubelet[2159]: I0302 13:05:35.128131 2159 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 13:05:35.128410 kubelet[2159]: E0302 13:05:35.128196 2159 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 13:05:35.144403 kubelet[2159]: E0302 13:05:35.144318 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.244896 kubelet[2159]: E0302 13:05:35.244840 2159 
kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.310293 kubelet[2159]: E0302 13:05:35.310129 2159 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:05:35.310412 kubelet[2159]: E0302 13:05:35.310382 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:35.346141 kubelet[2159]: E0302 13:05:35.346039 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.447145 kubelet[2159]: E0302 13:05:35.447006 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.548231 kubelet[2159]: E0302 13:05:35.548146 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.650108 kubelet[2159]: E0302 13:05:35.649129 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.749489 kubelet[2159]: E0302 13:05:35.749389 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.850826 kubelet[2159]: E0302 13:05:35.850709 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:35.954274 kubelet[2159]: E0302 13:05:35.953200 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:36.367012 kubelet[2159]: E0302 13:05:36.364729 2159 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:05:36.399444 kubelet[2159]: I0302 13:05:36.398145 2159 
kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:36.409523 kubelet[2159]: I0302 13:05:36.409446 2159 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:36.438010 kubelet[2159]: I0302 13:05:36.437836 2159 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:37.319445 kubelet[2159]: I0302 13:05:37.294627 2159 apiserver.go:52] "Watching apiserver" Mar 2 13:05:37.498825 kubelet[2159]: E0302 13:05:37.496509 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:37.498825 kubelet[2159]: E0302 13:05:37.497486 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:37.498825 kubelet[2159]: I0302 13:05:37.497657 2159 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:05:37.517039 kubelet[2159]: E0302 13:05:37.516108 2159 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:37.545718 systemd[1]: Reloading requested from client PID 2454 ('systemctl') (unit session-7.scope)... Mar 2 13:05:37.545763 systemd[1]: Reloading... Mar 2 13:05:37.667004 zram_generator::config[2493]: No configuration found. Mar 2 13:05:37.744677 update_engine[1454]: I20260302 13:05:37.744390 1454 update_attempter.cc:509] Updating boot flags... Mar 2 13:05:37.841908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 2 13:05:37.963412 systemd[1]: Reloading finished in 416 ms. Mar 2 13:05:38.012027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2535) Mar 2 13:05:38.087116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2538) Mar 2 13:05:38.087513 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:38.166384 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:05:38.166817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:05:38.166908 systemd[1]: kubelet.service: Consumed 4.619s CPU time, 131.6M memory peak, 0B memory swap peak. Mar 2 13:05:38.196567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:05:38.227991 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2538) Mar 2 13:05:38.477587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:05:38.500510 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:05:38.571641 kubelet[2554]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 13:05:38.583684 kubelet[2554]: I0302 13:05:38.583579 2554 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 13:05:38.583684 kubelet[2554]: I0302 13:05:38.583655 2554 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:05:38.583684 kubelet[2554]: I0302 13:05:38.583685 2554 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:05:38.583684 kubelet[2554]: I0302 13:05:38.583696 2554 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:05:38.584239 kubelet[2554]: I0302 13:05:38.584182 2554 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 13:05:38.584243 sudo[2567]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 2 13:05:38.584828 sudo[2567]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 2 13:05:38.586350 kubelet[2554]: I0302 13:05:38.586285 2554 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:05:38.596505 kubelet[2554]: I0302 13:05:38.596432 2554 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:05:38.612216 kubelet[2554]: E0302 13:05:38.612136 2554 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:05:38.612394 kubelet[2554]: I0302 13:05:38.612237 2554 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:05:38.622139 kubelet[2554]: I0302 13:05:38.622031 2554 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:05:38.622613 kubelet[2554]: I0302 13:05:38.622540 2554 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:05:38.622866 kubelet[2554]: I0302 13:05:38.622608 2554 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:05:38.623117 kubelet[2554]: I0302 13:05:38.622876 2554 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 13:05:38.623117 
kubelet[2554]: I0302 13:05:38.622893 2554 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 13:05:38.623117 kubelet[2554]: I0302 13:05:38.623018 2554 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:05:38.623989 kubelet[2554]: I0302 13:05:38.623386 2554 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 13:05:38.623989 kubelet[2554]: I0302 13:05:38.623630 2554 kubelet.go:482] "Attempting to sync node with API server" Mar 2 13:05:38.623989 kubelet[2554]: I0302 13:05:38.623660 2554 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:05:38.623989 kubelet[2554]: I0302 13:05:38.623686 2554 kubelet.go:394] "Adding apiserver pod source" Mar 2 13:05:38.623989 kubelet[2554]: I0302 13:05:38.623701 2554 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:05:38.642920 kubelet[2554]: I0302 13:05:38.642847 2554 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:05:38.645657 kubelet[2554]: I0302 13:05:38.645595 2554 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:05:38.645657 kubelet[2554]: I0302 13:05:38.645659 2554 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:05:38.652205 kubelet[2554]: I0302 13:05:38.652075 2554 server.go:1257] "Started kubelet" Mar 2 13:05:38.654879 kubelet[2554]: I0302 13:05:38.653695 2554 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 13:05:38.654879 kubelet[2554]: I0302 13:05:38.653540 2554 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:05:38.654879 kubelet[2554]: I0302 13:05:38.654113 2554 server_v1.go:49] "podresources" 
method="list" useActivePods=true Mar 2 13:05:38.654879 kubelet[2554]: I0302 13:05:38.654402 2554 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:05:38.654879 kubelet[2554]: I0302 13:05:38.654466 2554 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:05:38.659486 kubelet[2554]: I0302 13:05:38.659427 2554 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:05:38.661103 kubelet[2554]: I0302 13:05:38.661067 2554 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 13:05:38.661168 kubelet[2554]: I0302 13:05:38.661143 2554 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:05:38.661634 kubelet[2554]: I0302 13:05:38.661257 2554 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:05:38.666518 kubelet[2554]: I0302 13:05:38.666471 2554 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:05:38.668379 kubelet[2554]: I0302 13:05:38.668243 2554 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:05:38.677437 kubelet[2554]: I0302 13:05:38.677074 2554 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:05:38.677437 kubelet[2554]: I0302 13:05:38.677095 2554 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:05:38.694022 kubelet[2554]: I0302 13:05:38.693435 2554 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:05:38.698622 kubelet[2554]: I0302 13:05:38.698288 2554 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 2 13:05:38.698622 kubelet[2554]: I0302 13:05:38.698310 2554 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 13:05:38.698622 kubelet[2554]: I0302 13:05:38.698369 2554 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 13:05:38.698622 kubelet[2554]: E0302 13:05:38.698424 2554 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740426 2554 cpu_manager.go:225] "Starting" policy="none" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740446 2554 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740466 2554 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740625 2554 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740636 2554 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740653 2554 policy_none.go:50] "Start" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740662 2554 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740674 2554 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740795 2554 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 13:05:38.742553 kubelet[2554]: I0302 13:05:38.740806 2554 policy_none.go:44] "Start" Mar 2 13:05:38.749497 kubelet[2554]: E0302 13:05:38.749475 2554 manager.go:525] "Failed to read data from checkpoint" err="checkpoint 
is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:05:38.749864 kubelet[2554]: I0302 13:05:38.749849 2554 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:05:38.750130 kubelet[2554]: I0302 13:05:38.750100 2554 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:05:38.751997 kubelet[2554]: E0302 13:05:38.751765 2554 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:05:38.753972 kubelet[2554]: I0302 13:05:38.753873 2554 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:05:38.799302 kubelet[2554]: I0302 13:05:38.799195 2554 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:38.802980 kubelet[2554]: I0302 13:05:38.800392 2554 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:38.802980 kubelet[2554]: I0302 13:05:38.800890 2554 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.810722 kubelet[2554]: E0302 13:05:38.810645 2554 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:38.810722 kubelet[2554]: E0302 13:05:38.810699 2554 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:38.811541 kubelet[2554]: E0302 13:05:38.811490 2554 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.860669 kubelet[2554]: I0302 13:05:38.860608 2554 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 
13:05:38.862728 kubelet[2554]: I0302 13:05:38.862680 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.862808 kubelet[2554]: I0302 13:05:38.862744 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:38.862808 kubelet[2554]: I0302 13:05:38.862772 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.862808 kubelet[2554]: I0302 13:05:38.862793 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.862986 kubelet[2554]: I0302 13:05:38.862813 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.862986 kubelet[2554]: I0302 13:05:38.862835 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5c55cacb33aeff715d9c6ca534df2fc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5c55cacb33aeff715d9c6ca534df2fc\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:38.862986 kubelet[2554]: I0302 13:05:38.862853 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5c55cacb33aeff715d9c6ca534df2fc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5c55cacb33aeff715d9c6ca534df2fc\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:38.862986 kubelet[2554]: I0302 13:05:38.862872 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5c55cacb33aeff715d9c6ca534df2fc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5c55cacb33aeff715d9c6ca534df2fc\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:38.862986 kubelet[2554]: I0302 13:05:38.862893 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:05:38.869502 kubelet[2554]: I0302 13:05:38.869430 2554 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 2 13:05:38.869595 kubelet[2554]: I0302 13:05:38.869530 2554 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 13:05:39.112273 kubelet[2554]: E0302 13:05:39.111528 2554 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:39.112273 kubelet[2554]: E0302 13:05:39.112172 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:39.112480 kubelet[2554]: E0302 13:05:39.112325 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:39.195631 sudo[2567]: pam_unix(sudo:session): session closed for user root Mar 2 13:05:39.627711 kubelet[2554]: I0302 13:05:39.627631 2554 apiserver.go:52] "Watching apiserver" Mar 2 13:05:39.662377 kubelet[2554]: I0302 13:05:39.662275 2554 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:05:39.721878 kubelet[2554]: E0302 13:05:39.721772 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:39.723734 kubelet[2554]: I0302 13:05:39.722563 2554 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:39.723734 kubelet[2554]: I0302 13:05:39.723058 2554 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:39.736982 kubelet[2554]: E0302 13:05:39.734509 2554 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 13:05:39.736982 kubelet[2554]: E0302 13:05:39.734766 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:39.736982 
kubelet[2554]: E0302 13:05:39.734815 2554 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 13:05:39.736982 kubelet[2554]: E0302 13:05:39.735551 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:39.748532 kubelet[2554]: I0302 13:05:39.748446 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.748390045 podStartE2EDuration="3.748390045s" podCreationTimestamp="2026-03-02 13:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:05:39.748368186 +0000 UTC m=+1.242472365" watchObservedRunningTime="2026-03-02 13:05:39.748390045 +0000 UTC m=+1.242494223" Mar 2 13:05:39.761165 kubelet[2554]: I0302 13:05:39.761099 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.761085529 podStartE2EDuration="3.761085529s" podCreationTimestamp="2026-03-02 13:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:05:39.760472298 +0000 UTC m=+1.254576477" watchObservedRunningTime="2026-03-02 13:05:39.761085529 +0000 UTC m=+1.255189708" Mar 2 13:05:39.779706 kubelet[2554]: I0302 13:05:39.779566 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.779548718 podStartE2EDuration="3.779548718s" podCreationTimestamp="2026-03-02 13:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:05:39.768984346 
+0000 UTC m=+1.263088525" watchObservedRunningTime="2026-03-02 13:05:39.779548718 +0000 UTC m=+1.273652896" Mar 2 13:05:40.489185 sudo[1643]: pam_unix(sudo:session): session closed for user root Mar 2 13:05:40.492135 sshd[1640]: pam_unix(sshd:session): session closed for user core Mar 2 13:05:40.497448 systemd[1]: sshd@6-10.0.0.87:22-10.0.0.1:58812.service: Deactivated successfully. Mar 2 13:05:40.499601 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 13:05:40.499897 systemd[1]: session-7.scope: Consumed 10.356s CPU time, 158.6M memory peak, 0B memory swap peak. Mar 2 13:05:40.500697 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Mar 2 13:05:40.502824 systemd-logind[1448]: Removed session 7. Mar 2 13:05:40.724065 kubelet[2554]: E0302 13:05:40.723910 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:40.724810 kubelet[2554]: E0302 13:05:40.724307 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:41.736251 kubelet[2554]: E0302 13:05:41.735665 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:42.736576 kubelet[2554]: E0302 13:05:42.736536 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:42.878560 kubelet[2554]: I0302 13:05:42.878101 2554 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:05:42.879405 containerd[1469]: time="2026-03-02T13:05:42.879276361Z" level=info msg="No cni config template is specified, wait for other 
system components to drop the config." Mar 2 13:05:42.881407 kubelet[2554]: I0302 13:05:42.881337 2554 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:05:43.251587 kubelet[2554]: E0302 13:05:43.251325 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.049579 systemd[1]: Created slice kubepods-besteffort-pod712efe89_0ca3_4af0_8a5a_a148333a74b8.slice - libcontainer container kubepods-besteffort-pod712efe89_0ca3_4af0_8a5a_a148333a74b8.slice. Mar 2 13:05:44.067171 systemd[1]: Created slice kubepods-burstable-pod5c90a5e1_8c29_42d3_add8_83bd26f96c60.slice - libcontainer container kubepods-burstable-pod5c90a5e1_8c29_42d3_add8_83bd26f96c60.slice. Mar 2 13:05:44.102596 systemd[1]: Created slice kubepods-besteffort-poda8a2edeb_2d52_4286_9ff3_468cffdbf3df.slice - libcontainer container kubepods-besteffort-poda8a2edeb_2d52_4286_9ff3_468cffdbf3df.slice. 
Mar 2 13:05:44.107183 kubelet[2554]: I0302 13:05:44.103822 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-run\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.107183 kubelet[2554]: I0302 13:05:44.103853 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-cgroup\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.107183 kubelet[2554]: I0302 13:05:44.103869 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-lib-modules\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.107183 kubelet[2554]: I0302 13:05:44.103883 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/712efe89-0ca3-4af0-8a5a-a148333a74b8-kube-proxy\") pod \"kube-proxy-lr62h\" (UID: \"712efe89-0ca3-4af0-8a5a-a148333a74b8\") " pod="kube-system/kube-proxy-lr62h" Mar 2 13:05:44.107183 kubelet[2554]: I0302 13:05:44.103895 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/712efe89-0ca3-4af0-8a5a-a148333a74b8-xtables-lock\") pod \"kube-proxy-lr62h\" (UID: \"712efe89-0ca3-4af0-8a5a-a148333a74b8\") " pod="kube-system/kube-proxy-lr62h" Mar 2 13:05:44.107183 kubelet[2554]: I0302 13:05:44.103907 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/712efe89-0ca3-4af0-8a5a-a148333a74b8-lib-modules\") pod \"kube-proxy-lr62h\" (UID: \"712efe89-0ca3-4af0-8a5a-a148333a74b8\") " pod="kube-system/kube-proxy-lr62h" Mar 2 13:05:44.108061 kubelet[2554]: I0302 13:05:44.108030 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h297x\" (UniqueName: \"kubernetes.io/projected/712efe89-0ca3-4af0-8a5a-a148333a74b8-kube-api-access-h297x\") pod \"kube-proxy-lr62h\" (UID: \"712efe89-0ca3-4af0-8a5a-a148333a74b8\") " pod="kube-system/kube-proxy-lr62h" Mar 2 13:05:44.108398 kubelet[2554]: I0302 13:05:44.108220 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-etc-cni-netd\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.108668 kubelet[2554]: I0302 13:05:44.108540 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-bpf-maps\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.108991 kubelet[2554]: I0302 13:05:44.108905 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hostproc\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.109157 kubelet[2554]: I0302 13:05:44.109104 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-xtables-lock\") pod 
\"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.109308 kubelet[2554]: I0302 13:05:44.109291 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c90a5e1-8c29-42d3-add8-83bd26f96c60-clustermesh-secrets\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.109539 kubelet[2554]: I0302 13:05:44.109472 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-config-path\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.109663 kubelet[2554]: I0302 13:05:44.109647 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cni-path\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.144038 kubelet[2554]: E0302 13:05:44.143328 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.211058 kubelet[2554]: I0302 13:05:44.210901 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-net\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.211058 kubelet[2554]: I0302 13:05:44.211052 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hubble-tls\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.211319 kubelet[2554]: I0302 13:05:44.211083 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdswl\" (UniqueName: \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-kube-api-access-cdswl\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.211319 kubelet[2554]: I0302 13:05:44.211144 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-kernel\") pod \"cilium-6xrrq\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " pod="kube-system/cilium-6xrrq" Mar 2 13:05:44.211319 kubelet[2554]: I0302 13:05:44.211172 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkkcs\" (UniqueName: \"kubernetes.io/projected/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-kube-api-access-lkkcs\") pod \"cilium-operator-78cf5644cb-76xps\" (UID: \"a8a2edeb-2d52-4286-9ff3-468cffdbf3df\") " pod="kube-system/cilium-operator-78cf5644cb-76xps" Mar 2 13:05:44.211319 kubelet[2554]: I0302 13:05:44.211244 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-cilium-config-path\") pod \"cilium-operator-78cf5644cb-76xps\" (UID: \"a8a2edeb-2d52-4286-9ff3-468cffdbf3df\") " pod="kube-system/cilium-operator-78cf5644cb-76xps" Mar 2 13:05:44.366350 kubelet[2554]: E0302 13:05:44.366175 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.367271 containerd[1469]: time="2026-03-02T13:05:44.367201661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr62h,Uid:712efe89-0ca3-4af0-8a5a-a148333a74b8,Namespace:kube-system,Attempt:0,}" Mar 2 13:05:44.376008 kubelet[2554]: E0302 13:05:44.375896 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.376835 containerd[1469]: time="2026-03-02T13:05:44.376665355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xrrq,Uid:5c90a5e1-8c29-42d3-add8-83bd26f96c60,Namespace:kube-system,Attempt:0,}" Mar 2 13:05:44.411080 kubelet[2554]: E0302 13:05:44.410984 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.412294 containerd[1469]: time="2026-03-02T13:05:44.412159182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-76xps,Uid:a8a2edeb-2d52-4286-9ff3-468cffdbf3df,Namespace:kube-system,Attempt:0,}" Mar 2 13:05:44.437974 containerd[1469]: time="2026-03-02T13:05:44.437625374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:05:44.437974 containerd[1469]: time="2026-03-02T13:05:44.437828277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:05:44.437974 containerd[1469]: time="2026-03-02T13:05:44.437844728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:44.438295 containerd[1469]: time="2026-03-02T13:05:44.438048943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:44.443566 containerd[1469]: time="2026-03-02T13:05:44.443388617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:05:44.443872 containerd[1469]: time="2026-03-02T13:05:44.443763010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:05:44.445066 containerd[1469]: time="2026-03-02T13:05:44.443852001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:44.446436 containerd[1469]: time="2026-03-02T13:05:44.446188751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:44.462095 containerd[1469]: time="2026-03-02T13:05:44.461787396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:05:44.462095 containerd[1469]: time="2026-03-02T13:05:44.461894914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:05:44.462095 containerd[1469]: time="2026-03-02T13:05:44.461910854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:44.462095 containerd[1469]: time="2026-03-02T13:05:44.462048430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:05:44.487166 systemd[1]: Started cri-containerd-5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120.scope - libcontainer container 5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120. Mar 2 13:05:44.501221 systemd[1]: Started cri-containerd-8d29ad62a8fa8dbf01cb9886b6ea28b270106242a554958c81d06481bd9b799c.scope - libcontainer container 8d29ad62a8fa8dbf01cb9886b6ea28b270106242a554958c81d06481bd9b799c. Mar 2 13:05:44.520137 systemd[1]: Started cri-containerd-2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf.scope - libcontainer container 2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf. Mar 2 13:05:44.552048 containerd[1469]: time="2026-03-02T13:05:44.552011825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xrrq,Uid:5c90a5e1-8c29-42d3-add8-83bd26f96c60,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\"" Mar 2 13:05:44.554311 kubelet[2554]: E0302 13:05:44.553103 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.556920 containerd[1469]: time="2026-03-02T13:05:44.556844887Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 2 13:05:44.569647 containerd[1469]: time="2026-03-02T13:05:44.569600954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr62h,Uid:712efe89-0ca3-4af0-8a5a-a148333a74b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d29ad62a8fa8dbf01cb9886b6ea28b270106242a554958c81d06481bd9b799c\"" Mar 2 13:05:44.570585 kubelet[2554]: E0302 13:05:44.570536 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.580890 containerd[1469]: time="2026-03-02T13:05:44.580682349Z" level=info msg="CreateContainer within sandbox \"8d29ad62a8fa8dbf01cb9886b6ea28b270106242a554958c81d06481bd9b799c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:05:44.606042 containerd[1469]: time="2026-03-02T13:05:44.605316487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-76xps,Uid:a8a2edeb-2d52-4286-9ff3-468cffdbf3df,Namespace:kube-system,Attempt:0,} returns sandbox id \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\"" Mar 2 13:05:44.608227 containerd[1469]: time="2026-03-02T13:05:44.608134637Z" level=info msg="CreateContainer within sandbox \"8d29ad62a8fa8dbf01cb9886b6ea28b270106242a554958c81d06481bd9b799c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b295418e0f4708812a70821077ee1de771d88fbb0141b64cfce20803f42f39b2\"" Mar 2 13:05:44.610135 containerd[1469]: time="2026-03-02T13:05:44.609972521Z" level=info msg="StartContainer for \"b295418e0f4708812a70821077ee1de771d88fbb0141b64cfce20803f42f39b2\"" Mar 2 13:05:44.610648 kubelet[2554]: E0302 13:05:44.610462 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:44.654231 systemd[1]: Started cri-containerd-b295418e0f4708812a70821077ee1de771d88fbb0141b64cfce20803f42f39b2.scope - libcontainer container b295418e0f4708812a70821077ee1de771d88fbb0141b64cfce20803f42f39b2. 
Mar 2 13:05:44.700550 containerd[1469]: time="2026-03-02T13:05:44.699550226Z" level=info msg="StartContainer for \"b295418e0f4708812a70821077ee1de771d88fbb0141b64cfce20803f42f39b2\" returns successfully" Mar 2 13:05:44.748069 kubelet[2554]: E0302 13:05:44.748016 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:52.174888 kubelet[2554]: E0302 13:05:52.170109 2554 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.362s" Mar 2 13:05:53.611285 kubelet[2554]: E0302 13:05:53.605705 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:53.903909 kubelet[2554]: I0302 13:05:53.902907 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-lr62h" podStartSLOduration=10.902886427 podStartE2EDuration="10.902886427s" podCreationTimestamp="2026-03-02 13:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:05:44.762684512 +0000 UTC m=+6.256788691" watchObservedRunningTime="2026-03-02 13:05:53.902886427 +0000 UTC m=+15.396990605" Mar 2 13:05:53.906322 kubelet[2554]: E0302 13:05:53.906244 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:05:54.270170 kubelet[2554]: E0302 13:05:54.267067 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:01.612694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328518920.mount: Deactivated 
successfully. Mar 2 13:06:03.460185 containerd[1469]: time="2026-03-02T13:06:03.456303671Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:06:03.460185 containerd[1469]: time="2026-03-02T13:06:03.458818017Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 2 13:06:03.460185 containerd[1469]: time="2026-03-02T13:06:03.459767726Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:06:03.461987 containerd[1469]: time="2026-03-02T13:06:03.461841715Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.904938797s" Mar 2 13:06:03.461987 containerd[1469]: time="2026-03-02T13:06:03.461883935Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 2 13:06:03.465293 containerd[1469]: time="2026-03-02T13:06:03.465222126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 2 13:06:03.471512 containerd[1469]: time="2026-03-02T13:06:03.471421712Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:06:03.492253 containerd[1469]: time="2026-03-02T13:06:03.492148541Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d\"" Mar 2 13:06:03.493003 containerd[1469]: time="2026-03-02T13:06:03.492818071Z" level=info msg="StartContainer for \"42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d\"" Mar 2 13:06:03.558245 systemd[1]: Started cri-containerd-42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d.scope - libcontainer container 42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d. Mar 2 13:06:03.623161 containerd[1469]: time="2026-03-02T13:06:03.623022984Z" level=info msg="StartContainer for \"42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d\" returns successfully" Mar 2 13:06:03.638233 systemd[1]: cri-containerd-42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d.scope: Deactivated successfully. 
Mar 2 13:06:03.785572 containerd[1469]: time="2026-03-02T13:06:03.785330326Z" level=info msg="shim disconnected" id=42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d namespace=k8s.io Mar 2 13:06:03.785572 containerd[1469]: time="2026-03-02T13:06:03.785532902Z" level=warning msg="cleaning up after shim disconnected" id=42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d namespace=k8s.io Mar 2 13:06:03.785572 containerd[1469]: time="2026-03-02T13:06:03.785568680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:06:03.979497 kubelet[2554]: E0302 13:06:03.979389 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:03.985862 containerd[1469]: time="2026-03-02T13:06:03.985760281Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:06:04.013047 containerd[1469]: time="2026-03-02T13:06:04.012503949Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17\"" Mar 2 13:06:04.015499 containerd[1469]: time="2026-03-02T13:06:04.015341320Z" level=info msg="StartContainer for \"9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17\"" Mar 2 13:06:04.077208 systemd[1]: Started cri-containerd-9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17.scope - libcontainer container 9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17. 
Mar 2 13:06:04.157084 containerd[1469]: time="2026-03-02T13:06:04.156909781Z" level=info msg="StartContainer for \"9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17\" returns successfully" Mar 2 13:06:04.176782 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:06:04.178309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:06:04.178554 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:06:04.188580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:06:04.189149 systemd[1]: cri-containerd-9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17.scope: Deactivated successfully. Mar 2 13:06:04.260841 containerd[1469]: time="2026-03-02T13:06:04.260692319Z" level=info msg="shim disconnected" id=9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17 namespace=k8s.io Mar 2 13:06:04.260841 containerd[1469]: time="2026-03-02T13:06:04.260785927Z" level=warning msg="cleaning up after shim disconnected" id=9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17 namespace=k8s.io Mar 2 13:06:04.260841 containerd[1469]: time="2026-03-02T13:06:04.260803681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:06:04.269180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:06:04.487583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d-rootfs.mount: Deactivated successfully. 
Mar 2 13:06:04.742526 containerd[1469]: time="2026-03-02T13:06:04.742293384Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:06:04.743368 containerd[1469]: time="2026-03-02T13:06:04.743237187Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 2 13:06:04.744805 containerd[1469]: time="2026-03-02T13:06:04.744726124Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:06:04.747720 containerd[1469]: time="2026-03-02T13:06:04.747594517Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.282339298s" Mar 2 13:06:04.747789 containerd[1469]: time="2026-03-02T13:06:04.747725005Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 2 13:06:04.756886 containerd[1469]: time="2026-03-02T13:06:04.756727661Z" level=info msg="CreateContainer within sandbox \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 2 13:06:04.776188 containerd[1469]: time="2026-03-02T13:06:04.776096475Z" level=info msg="CreateContainer within sandbox 
\"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\"" Mar 2 13:06:04.777983 containerd[1469]: time="2026-03-02T13:06:04.777139588Z" level=info msg="StartContainer for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\"" Mar 2 13:06:04.838254 systemd[1]: Started cri-containerd-0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda.scope - libcontainer container 0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda. Mar 2 13:06:04.871915 containerd[1469]: time="2026-03-02T13:06:04.871815796Z" level=info msg="StartContainer for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" returns successfully" Mar 2 13:06:04.985354 kubelet[2554]: E0302 13:06:04.985232 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:04.994098 kubelet[2554]: E0302 13:06:04.993888 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:05.003420 containerd[1469]: time="2026-03-02T13:06:05.003327233Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:06:05.087120 containerd[1469]: time="2026-03-02T13:06:05.086993484Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7\"" Mar 2 13:06:05.089068 containerd[1469]: time="2026-03-02T13:06:05.089008731Z" level=info msg="StartContainer 
for \"e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7\"" Mar 2 13:06:05.093714 kubelet[2554]: I0302 13:06:05.093562 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-76xps" podStartSLOduration=0.95578899 podStartE2EDuration="21.093544041s" podCreationTimestamp="2026-03-02 13:05:44 +0000 UTC" firstStartedPulling="2026-03-02 13:05:44.611293714 +0000 UTC m=+6.105397892" lastFinishedPulling="2026-03-02 13:06:04.749048764 +0000 UTC m=+26.243152943" observedRunningTime="2026-03-02 13:06:05.03227879 +0000 UTC m=+26.526382979" watchObservedRunningTime="2026-03-02 13:06:05.093544041 +0000 UTC m=+26.587648241" Mar 2 13:06:05.167253 systemd[1]: Started cri-containerd-e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7.scope - libcontainer container e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7. Mar 2 13:06:05.342847 systemd[1]: cri-containerd-e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7.scope: Deactivated successfully. 
Mar 2 13:06:05.347054 containerd[1469]: time="2026-03-02T13:06:05.345302396Z" level=info msg="StartContainer for \"e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7\" returns successfully" Mar 2 13:06:05.444207 containerd[1469]: time="2026-03-02T13:06:05.443496411Z" level=info msg="shim disconnected" id=e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7 namespace=k8s.io Mar 2 13:06:05.444207 containerd[1469]: time="2026-03-02T13:06:05.443648780Z" level=warning msg="cleaning up after shim disconnected" id=e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7 namespace=k8s.io Mar 2 13:06:05.444207 containerd[1469]: time="2026-03-02T13:06:05.443665121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:06:06.005338 kubelet[2554]: E0302 13:06:06.005180 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:06.006267 kubelet[2554]: E0302 13:06:06.006193 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:06.022041 containerd[1469]: time="2026-03-02T13:06:06.021865117Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:06:06.056259 containerd[1469]: time="2026-03-02T13:06:06.055825643Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9\"" Mar 2 13:06:06.065186 containerd[1469]: time="2026-03-02T13:06:06.065099300Z" level=info msg="StartContainer for 
\"681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9\"" Mar 2 13:06:06.240307 systemd[1]: Started cri-containerd-681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9.scope - libcontainer container 681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9. Mar 2 13:06:06.291272 systemd[1]: cri-containerd-681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9.scope: Deactivated successfully. Mar 2 13:06:06.294470 containerd[1469]: time="2026-03-02T13:06:06.294400613Z" level=info msg="StartContainer for \"681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9\" returns successfully" Mar 2 13:06:06.338100 containerd[1469]: time="2026-03-02T13:06:06.337568839Z" level=info msg="shim disconnected" id=681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9 namespace=k8s.io Mar 2 13:06:06.338100 containerd[1469]: time="2026-03-02T13:06:06.337642930Z" level=warning msg="cleaning up after shim disconnected" id=681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9 namespace=k8s.io Mar 2 13:06:06.338100 containerd[1469]: time="2026-03-02T13:06:06.337654933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:06:06.487707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9-rootfs.mount: Deactivated successfully. 
Mar 2 13:06:07.011268 kubelet[2554]: E0302 13:06:07.011149 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:07.017790 containerd[1469]: time="2026-03-02T13:06:07.017676748Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 2 13:06:07.042322 containerd[1469]: time="2026-03-02T13:06:07.042256758Z" level=info msg="CreateContainer within sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\"" Mar 2 13:06:07.043202 containerd[1469]: time="2026-03-02T13:06:07.043149561Z" level=info msg="StartContainer for \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\"" Mar 2 13:06:07.078135 systemd[1]: Started cri-containerd-83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435.scope - libcontainer container 83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435. Mar 2 13:06:07.128082 containerd[1469]: time="2026-03-02T13:06:07.127876799Z" level=info msg="StartContainer for \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\" returns successfully" Mar 2 13:06:07.346731 kubelet[2554]: I0302 13:06:07.346548 2554 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 2 13:06:07.398015 systemd[1]: Created slice kubepods-burstable-pod12dfd643_3752_44c6_8b4e_1b7955ff24d0.slice - libcontainer container kubepods-burstable-pod12dfd643_3752_44c6_8b4e_1b7955ff24d0.slice. Mar 2 13:06:07.409734 systemd[1]: Created slice kubepods-burstable-podef3880c1_6f9a_49f1_9408_632fdbc76a66.slice - libcontainer container kubepods-burstable-podef3880c1_6f9a_49f1_9408_632fdbc76a66.slice. 
Mar 2 13:06:07.481216 kubelet[2554]: I0302 13:06:07.481107 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmtv\" (UniqueName: \"kubernetes.io/projected/ef3880c1-6f9a-49f1-9408-632fdbc76a66-kube-api-access-jhmtv\") pod \"coredns-7d764666f9-9nd9s\" (UID: \"ef3880c1-6f9a-49f1-9408-632fdbc76a66\") " pod="kube-system/coredns-7d764666f9-9nd9s" Mar 2 13:06:07.481415 kubelet[2554]: I0302 13:06:07.481218 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12dfd643-3752-44c6-8b4e-1b7955ff24d0-config-volume\") pod \"coredns-7d764666f9-j7nng\" (UID: \"12dfd643-3752-44c6-8b4e-1b7955ff24d0\") " pod="kube-system/coredns-7d764666f9-j7nng" Mar 2 13:06:07.481415 kubelet[2554]: I0302 13:06:07.481295 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcw2\" (UniqueName: \"kubernetes.io/projected/12dfd643-3752-44c6-8b4e-1b7955ff24d0-kube-api-access-hgcw2\") pod \"coredns-7d764666f9-j7nng\" (UID: \"12dfd643-3752-44c6-8b4e-1b7955ff24d0\") " pod="kube-system/coredns-7d764666f9-j7nng" Mar 2 13:06:07.481415 kubelet[2554]: I0302 13:06:07.481310 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef3880c1-6f9a-49f1-9408-632fdbc76a66-config-volume\") pod \"coredns-7d764666f9-9nd9s\" (UID: \"ef3880c1-6f9a-49f1-9408-632fdbc76a66\") " pod="kube-system/coredns-7d764666f9-9nd9s" Mar 2 13:06:07.706328 kubelet[2554]: E0302 13:06:07.706227 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:07.722048 kubelet[2554]: E0302 13:06:07.719294 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:07.722204 containerd[1469]: time="2026-03-02T13:06:07.720339793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9nd9s,Uid:ef3880c1-6f9a-49f1-9408-632fdbc76a66,Namespace:kube-system,Attempt:0,}" Mar 2 13:06:07.732836 containerd[1469]: time="2026-03-02T13:06:07.732106470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j7nng,Uid:12dfd643-3752-44c6-8b4e-1b7955ff24d0,Namespace:kube-system,Attempt:0,}" Mar 2 13:06:08.021440 kubelet[2554]: E0302 13:06:08.021177 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:08.041423 kubelet[2554]: I0302 13:06:08.041353 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-6xrrq" podStartSLOduration=2.584669392 podStartE2EDuration="25.041338036s" podCreationTimestamp="2026-03-02 13:05:43 +0000 UTC" firstStartedPulling="2026-03-02 13:05:44.555894158 +0000 UTC m=+6.049998348" lastFinishedPulling="2026-03-02 13:06:07.012562794 +0000 UTC m=+28.506666992" observedRunningTime="2026-03-02 13:06:08.040659171 +0000 UTC m=+29.534763380" watchObservedRunningTime="2026-03-02 13:06:08.041338036 +0000 UTC m=+29.535442216" Mar 2 13:06:09.023890 kubelet[2554]: E0302 13:06:09.023772 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:09.517676 systemd-networkd[1399]: cilium_host: Link UP Mar 2 13:06:09.520722 systemd-networkd[1399]: cilium_net: Link UP Mar 2 13:06:09.521644 systemd-networkd[1399]: cilium_net: Gained carrier Mar 2 13:06:09.522617 systemd-networkd[1399]: cilium_host: Gained carrier Mar 2 13:06:09.658487 systemd-networkd[1399]: cilium_vxlan: Link UP Mar 2 
13:06:09.659461 systemd-networkd[1399]: cilium_vxlan: Gained carrier Mar 2 13:06:09.664461 systemd-networkd[1399]: cilium_host: Gained IPv6LL Mar 2 13:06:09.924020 kernel: NET: Registered PF_ALG protocol family Mar 2 13:06:10.025738 kubelet[2554]: E0302 13:06:10.025572 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:10.063230 systemd-networkd[1399]: cilium_net: Gained IPv6LL Mar 2 13:06:10.809423 systemd-networkd[1399]: lxc_health: Link UP Mar 2 13:06:10.825344 systemd-networkd[1399]: lxc_health: Gained carrier Mar 2 13:06:11.024211 systemd-networkd[1399]: cilium_vxlan: Gained IPv6LL Mar 2 13:06:11.324663 systemd-networkd[1399]: lxc7878ac5583d3: Link UP Mar 2 13:06:11.334082 kernel: eth0: renamed from tmpd3b24 Mar 2 13:06:11.345339 systemd-networkd[1399]: lxc7878ac5583d3: Gained carrier Mar 2 13:06:11.348372 systemd-networkd[1399]: lxc7758d0f4cb87: Link UP Mar 2 13:06:11.356989 kernel: eth0: renamed from tmp2db2d Mar 2 13:06:11.361792 systemd-networkd[1399]: lxc7758d0f4cb87: Gained carrier Mar 2 13:06:12.375010 kubelet[2554]: E0302 13:06:12.374841 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:12.752172 systemd-networkd[1399]: lxc_health: Gained IPv6LL Mar 2 13:06:13.071181 kubelet[2554]: E0302 13:06:13.070991 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:13.199222 systemd-networkd[1399]: lxc7878ac5583d3: Gained IPv6LL Mar 2 13:06:13.391220 systemd-networkd[1399]: lxc7758d0f4cb87: Gained IPv6LL Mar 2 13:06:14.073393 kubelet[2554]: E0302 13:06:14.073303 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:14.814382 containerd[1469]: time="2026-03-02T13:06:14.812869571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:06:14.814382 containerd[1469]: time="2026-03-02T13:06:14.814077267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:06:14.814382 containerd[1469]: time="2026-03-02T13:06:14.814089260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:06:14.814382 containerd[1469]: time="2026-03-02T13:06:14.814215149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:06:14.819821 containerd[1469]: time="2026-03-02T13:06:14.819474834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:06:14.819821 containerd[1469]: time="2026-03-02T13:06:14.819537282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:06:14.819821 containerd[1469]: time="2026-03-02T13:06:14.819562530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:06:14.819821 containerd[1469]: time="2026-03-02T13:06:14.819676857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:06:14.842350 systemd[1]: run-containerd-runc-k8s.io-2db2d3d3a86d70cd7b393dd2ae40565795eaba45c898b7164f6744caf86063be-runc.x9A0sU.mount: Deactivated successfully. 
Mar 2 13:06:14.853151 systemd[1]: Started cri-containerd-2db2d3d3a86d70cd7b393dd2ae40565795eaba45c898b7164f6744caf86063be.scope - libcontainer container 2db2d3d3a86d70cd7b393dd2ae40565795eaba45c898b7164f6744caf86063be. Mar 2 13:06:14.857210 systemd[1]: Started cri-containerd-d3b24f2271cae1eab4ca0f20c94cd253ab419cded917afc75030e92495bb939a.scope - libcontainer container d3b24f2271cae1eab4ca0f20c94cd253ab419cded917afc75030e92495bb939a. Mar 2 13:06:14.873184 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:06:14.873201 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:06:14.902505 containerd[1469]: time="2026-03-02T13:06:14.902177178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j7nng,Uid:12dfd643-3752-44c6-8b4e-1b7955ff24d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2db2d3d3a86d70cd7b393dd2ae40565795eaba45c898b7164f6744caf86063be\"" Mar 2 13:06:14.904101 kubelet[2554]: E0302 13:06:14.903445 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:14.912817 containerd[1469]: time="2026-03-02T13:06:14.912747410Z" level=info msg="CreateContainer within sandbox \"2db2d3d3a86d70cd7b393dd2ae40565795eaba45c898b7164f6744caf86063be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:06:14.921438 containerd[1469]: time="2026-03-02T13:06:14.921279551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9nd9s,Uid:ef3880c1-6f9a-49f1-9408-632fdbc76a66,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3b24f2271cae1eab4ca0f20c94cd253ab419cded917afc75030e92495bb939a\"" Mar 2 13:06:14.922630 kubelet[2554]: E0302 13:06:14.922547 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:14.932543 containerd[1469]: time="2026-03-02T13:06:14.932471404Z" level=info msg="CreateContainer within sandbox \"d3b24f2271cae1eab4ca0f20c94cd253ab419cded917afc75030e92495bb939a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:06:14.939906 containerd[1469]: time="2026-03-02T13:06:14.939230881Z" level=info msg="CreateContainer within sandbox \"2db2d3d3a86d70cd7b393dd2ae40565795eaba45c898b7164f6744caf86063be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d432e7e0aec18ab7f4cf810acbc00cc71b88ce1939cbec8b94fcb642e674301\"" Mar 2 13:06:14.940297 containerd[1469]: time="2026-03-02T13:06:14.940234370Z" level=info msg="StartContainer for \"6d432e7e0aec18ab7f4cf810acbc00cc71b88ce1939cbec8b94fcb642e674301\"" Mar 2 13:06:14.961219 containerd[1469]: time="2026-03-02T13:06:14.961128996Z" level=info msg="CreateContainer within sandbox \"d3b24f2271cae1eab4ca0f20c94cd253ab419cded917afc75030e92495bb939a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eeece287774ca63700e4d1a21720d440d59ce5ddeb5395a729f67f84b435f0b8\"" Mar 2 13:06:14.962112 containerd[1469]: time="2026-03-02T13:06:14.962044639Z" level=info msg="StartContainer for \"eeece287774ca63700e4d1a21720d440d59ce5ddeb5395a729f67f84b435f0b8\"" Mar 2 13:06:14.982518 systemd[1]: Started cri-containerd-6d432e7e0aec18ab7f4cf810acbc00cc71b88ce1939cbec8b94fcb642e674301.scope - libcontainer container 6d432e7e0aec18ab7f4cf810acbc00cc71b88ce1939cbec8b94fcb642e674301. Mar 2 13:06:15.009217 systemd[1]: Started cri-containerd-eeece287774ca63700e4d1a21720d440d59ce5ddeb5395a729f67f84b435f0b8.scope - libcontainer container eeece287774ca63700e4d1a21720d440d59ce5ddeb5395a729f67f84b435f0b8. 
Mar 2 13:06:15.025058 containerd[1469]: time="2026-03-02T13:06:15.024800731Z" level=info msg="StartContainer for \"6d432e7e0aec18ab7f4cf810acbc00cc71b88ce1939cbec8b94fcb642e674301\" returns successfully" Mar 2 13:06:15.045756 containerd[1469]: time="2026-03-02T13:06:15.045717547Z" level=info msg="StartContainer for \"eeece287774ca63700e4d1a21720d440d59ce5ddeb5395a729f67f84b435f0b8\" returns successfully" Mar 2 13:06:15.078703 kubelet[2554]: E0302 13:06:15.078437 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:15.085851 kubelet[2554]: E0302 13:06:15.085795 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:15.130241 kubelet[2554]: I0302 13:06:15.129745 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-j7nng" podStartSLOduration=31.129722639 podStartE2EDuration="31.129722639s" podCreationTimestamp="2026-03-02 13:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:06:15.128599895 +0000 UTC m=+36.622704103" watchObservedRunningTime="2026-03-02 13:06:15.129722639 +0000 UTC m=+36.623826818" Mar 2 13:06:15.130241 kubelet[2554]: I0302 13:06:15.129892 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9nd9s" podStartSLOduration=31.129880368 podStartE2EDuration="31.129880368s" podCreationTimestamp="2026-03-02 13:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:06:15.104518824 +0000 UTC m=+36.598623033" watchObservedRunningTime="2026-03-02 13:06:15.129880368 +0000 UTC 
m=+36.623984557" Mar 2 13:06:15.822484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580382455.mount: Deactivated successfully. Mar 2 13:06:16.088707 kubelet[2554]: E0302 13:06:16.088395 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:16.088707 kubelet[2554]: E0302 13:06:16.088508 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:17.090688 kubelet[2554]: E0302 13:06:17.090564 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:17.091268 kubelet[2554]: E0302 13:06:17.090705 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:19.055328 systemd[1]: Started sshd@7-10.0.0.87:22-10.0.0.1:53530.service - OpenSSH per-connection server daemon (10.0.0.1:53530). Mar 2 13:06:19.118596 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:19.121178 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:19.128415 systemd-logind[1448]: New session 8 of user core. Mar 2 13:06:19.139147 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 13:06:19.294606 sshd[3955]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:19.300268 systemd[1]: sshd@7-10.0.0.87:22-10.0.0.1:53530.service: Deactivated successfully. Mar 2 13:06:19.302293 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 13:06:19.303235 systemd-logind[1448]: Session 8 logged out. 
Waiting for processes to exit. Mar 2 13:06:19.304595 systemd-logind[1448]: Removed session 8. Mar 2 13:06:24.311465 systemd[1]: Started sshd@8-10.0.0.87:22-10.0.0.1:33106.service - OpenSSH per-connection server daemon (10.0.0.1:33106). Mar 2 13:06:24.372013 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 33106 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:24.374366 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:24.381072 systemd-logind[1448]: New session 9 of user core. Mar 2 13:06:24.396188 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 13:06:24.556836 sshd[3978]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:24.561900 systemd[1]: sshd@8-10.0.0.87:22-10.0.0.1:33106.service: Deactivated successfully. Mar 2 13:06:24.565676 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 13:06:24.567708 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:06:24.569363 systemd-logind[1448]: Removed session 9. Mar 2 13:06:29.571052 systemd[1]: Started sshd@9-10.0.0.87:22-10.0.0.1:33112.service - OpenSSH per-connection server daemon (10.0.0.1:33112). Mar 2 13:06:29.632218 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 33112 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:29.635167 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:29.642623 systemd-logind[1448]: New session 10 of user core. Mar 2 13:06:29.653353 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:06:29.840611 sshd[3993]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:29.846456 systemd[1]: sshd@9-10.0.0.87:22-10.0.0.1:33112.service: Deactivated successfully. Mar 2 13:06:29.849348 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:06:29.850645 systemd-logind[1448]: Session 10 logged out. 
Waiting for processes to exit. Mar 2 13:06:29.852652 systemd-logind[1448]: Removed session 10. Mar 2 13:06:34.852695 systemd[1]: Started sshd@10-10.0.0.87:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954). Mar 2 13:06:34.899955 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:34.901800 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:34.908787 systemd-logind[1448]: New session 11 of user core. Mar 2 13:06:34.912453 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:06:35.028225 sshd[4008]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:35.032584 systemd[1]: sshd@10-10.0.0.87:22-10.0.0.1:59954.service: Deactivated successfully. Mar 2 13:06:35.034825 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:06:35.035796 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:06:35.037468 systemd-logind[1448]: Removed session 11. Mar 2 13:06:40.075323 systemd[1]: Started sshd@11-10.0.0.87:22-10.0.0.1:59968.service - OpenSSH per-connection server daemon (10.0.0.1:59968). Mar 2 13:06:40.154594 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 59968 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:40.154640 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:40.159692 systemd-logind[1448]: New session 12 of user core. Mar 2 13:06:40.166128 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:06:40.429165 sshd[4025]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:40.440358 systemd[1]: sshd@11-10.0.0.87:22-10.0.0.1:59968.service: Deactivated successfully. Mar 2 13:06:40.442230 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:06:40.444722 systemd-logind[1448]: Session 12 logged out. 
Waiting for processes to exit. Mar 2 13:06:40.446382 systemd[1]: Started sshd@12-10.0.0.87:22-10.0.0.1:59970.service - OpenSSH per-connection server daemon (10.0.0.1:59970). Mar 2 13:06:40.481799 systemd-logind[1448]: Removed session 12. Mar 2 13:06:40.577098 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 59970 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:40.578649 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:40.605137 systemd-logind[1448]: New session 13 of user core. Mar 2 13:06:40.617106 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:06:40.868320 sshd[4040]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:40.877693 systemd[1]: sshd@12-10.0.0.87:22-10.0.0.1:59970.service: Deactivated successfully. Mar 2 13:06:40.881285 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:06:40.883748 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:06:40.893810 systemd[1]: Started sshd@13-10.0.0.87:22-10.0.0.1:59976.service - OpenSSH per-connection server daemon (10.0.0.1:59976). Mar 2 13:06:40.895920 systemd-logind[1448]: Removed session 13. Mar 2 13:06:40.929691 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 59976 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:40.931591 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:40.937663 systemd-logind[1448]: New session 14 of user core. Mar 2 13:06:40.945241 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:06:41.053737 sshd[4052]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:41.058051 systemd[1]: sshd@13-10.0.0.87:22-10.0.0.1:59976.service: Deactivated successfully. Mar 2 13:06:41.060186 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:06:41.060843 systemd-logind[1448]: Session 14 logged out. 
Waiting for processes to exit. Mar 2 13:06:41.062440 systemd-logind[1448]: Removed session 14. Mar 2 13:06:45.700010 kubelet[2554]: E0302 13:06:45.699849 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:06:46.069212 systemd[1]: Started sshd@14-10.0.0.87:22-10.0.0.1:54314.service - OpenSSH per-connection server daemon (10.0.0.1:54314). Mar 2 13:06:46.110712 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 54314 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:46.113310 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:46.122554 systemd-logind[1448]: New session 15 of user core. Mar 2 13:06:46.129217 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:06:46.266355 sshd[4069]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:46.270458 systemd[1]: sshd@14-10.0.0.87:22-10.0.0.1:54314.service: Deactivated successfully. Mar 2 13:06:46.272494 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:06:46.274677 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Mar 2 13:06:46.276338 systemd-logind[1448]: Removed session 15. Mar 2 13:06:51.279099 systemd[1]: Started sshd@15-10.0.0.87:22-10.0.0.1:54328.service - OpenSSH per-connection server daemon (10.0.0.1:54328). Mar 2 13:06:51.319674 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 54328 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:51.321844 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:51.327994 systemd-logind[1448]: New session 16 of user core. Mar 2 13:06:51.333165 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 2 13:06:51.478099 sshd[4083]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:51.482690 systemd[1]: sshd@15-10.0.0.87:22-10.0.0.1:54328.service: Deactivated successfully. Mar 2 13:06:51.485307 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:06:51.486379 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Mar 2 13:06:51.487827 systemd-logind[1448]: Removed session 16. Mar 2 13:06:56.491906 systemd[1]: Started sshd@16-10.0.0.87:22-10.0.0.1:51052.service - OpenSSH per-connection server daemon (10.0.0.1:51052). Mar 2 13:06:56.554784 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 51052 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:56.557568 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:56.563695 systemd-logind[1448]: New session 17 of user core. Mar 2 13:06:56.574165 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 13:06:56.711040 sshd[4098]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:56.722821 systemd[1]: sshd@16-10.0.0.87:22-10.0.0.1:51052.service: Deactivated successfully. Mar 2 13:06:56.726166 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 13:06:56.728820 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:06:56.747767 systemd[1]: Started sshd@17-10.0.0.87:22-10.0.0.1:51054.service - OpenSSH per-connection server daemon (10.0.0.1:51054). Mar 2 13:06:56.749748 systemd-logind[1448]: Removed session 17. Mar 2 13:06:56.789136 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 51054 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:56.790811 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:56.797985 systemd-logind[1448]: New session 18 of user core. Mar 2 13:06:56.804260 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 2 13:06:57.210044 sshd[4112]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:57.237414 systemd[1]: sshd@17-10.0.0.87:22-10.0.0.1:51054.service: Deactivated successfully. Mar 2 13:06:57.240103 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:06:57.242483 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Mar 2 13:06:57.252445 systemd[1]: Started sshd@18-10.0.0.87:22-10.0.0.1:51064.service - OpenSSH per-connection server daemon (10.0.0.1:51064). Mar 2 13:06:57.253643 systemd-logind[1448]: Removed session 18. Mar 2 13:06:57.300440 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 51064 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:57.303342 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:57.311389 systemd-logind[1448]: New session 19 of user core. Mar 2 13:06:57.330762 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:06:58.042501 sshd[4125]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:58.055283 systemd[1]: sshd@18-10.0.0.87:22-10.0.0.1:51064.service: Deactivated successfully. Mar 2 13:06:58.058640 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:06:58.060996 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:06:58.076024 systemd[1]: Started sshd@19-10.0.0.87:22-10.0.0.1:51070.service - OpenSSH per-connection server daemon (10.0.0.1:51070). Mar 2 13:06:58.078186 systemd-logind[1448]: Removed session 19. Mar 2 13:06:58.118716 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 51070 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:58.120872 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:58.128020 systemd-logind[1448]: New session 20 of user core. Mar 2 13:06:58.139182 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 2 13:06:58.462543 sshd[4142]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:58.475437 systemd[1]: sshd@19-10.0.0.87:22-10.0.0.1:51070.service: Deactivated successfully. Mar 2 13:06:58.479789 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:06:58.483068 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Mar 2 13:06:58.497527 systemd[1]: Started sshd@20-10.0.0.87:22-10.0.0.1:51084.service - OpenSSH per-connection server daemon (10.0.0.1:51084). Mar 2 13:06:58.499055 systemd-logind[1448]: Removed session 20. Mar 2 13:06:58.546241 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 51084 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:06:58.549004 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:58.556717 systemd-logind[1448]: New session 21 of user core. Mar 2 13:06:58.564175 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:06:58.716440 sshd[4155]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:58.730720 systemd[1]: sshd@20-10.0.0.87:22-10.0.0.1:51084.service: Deactivated successfully. Mar 2 13:06:58.733329 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:06:58.735250 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:06:58.737094 systemd-logind[1448]: Removed session 21. Mar 2 13:06:59.699393 kubelet[2554]: E0302 13:06:59.699275 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:03.746081 systemd[1]: Started sshd@21-10.0.0.87:22-10.0.0.1:56110.service - OpenSSH per-connection server daemon (10.0.0.1:56110). 
Mar 2 13:07:03.797264 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 56110 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:03.799108 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:03.806718 systemd-logind[1448]: New session 22 of user core. Mar 2 13:07:03.816396 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 13:07:03.978047 sshd[4171]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:03.983627 systemd[1]: sshd@21-10.0.0.87:22-10.0.0.1:56110.service: Deactivated successfully. Mar 2 13:07:03.986719 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:07:03.989731 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:07:03.992348 systemd-logind[1448]: Removed session 22. Mar 2 13:07:08.996282 systemd[1]: Started sshd@22-10.0.0.87:22-10.0.0.1:56122.service - OpenSSH per-connection server daemon (10.0.0.1:56122). Mar 2 13:07:09.041972 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 56122 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:09.044174 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:09.051125 systemd-logind[1448]: New session 23 of user core. Mar 2 13:07:09.069196 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 13:07:09.190525 sshd[4187]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:09.195654 systemd[1]: sshd@22-10.0.0.87:22-10.0.0.1:56122.service: Deactivated successfully. Mar 2 13:07:09.197869 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 13:07:09.198710 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Mar 2 13:07:09.200435 systemd-logind[1448]: Removed session 23. 
Mar 2 13:07:10.700244 kubelet[2554]: E0302 13:07:10.700196 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:14.223253 systemd[1]: Started sshd@23-10.0.0.87:22-10.0.0.1:38340.service - OpenSSH per-connection server daemon (10.0.0.1:38340). Mar 2 13:07:14.260893 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 38340 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:14.263886 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:14.271403 systemd-logind[1448]: New session 24 of user core. Mar 2 13:07:14.284360 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 13:07:14.426721 sshd[4202]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:14.431477 systemd[1]: sshd@23-10.0.0.87:22-10.0.0.1:38340.service: Deactivated successfully. Mar 2 13:07:14.434235 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 13:07:14.436826 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Mar 2 13:07:14.438737 systemd-logind[1448]: Removed session 24. Mar 2 13:07:17.700481 kubelet[2554]: E0302 13:07:17.700390 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:18.699967 kubelet[2554]: E0302 13:07:18.699811 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:19.439978 systemd[1]: Started sshd@24-10.0.0.87:22-10.0.0.1:38346.service - OpenSSH per-connection server daemon (10.0.0.1:38346). 
Mar 2 13:07:19.482285 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 38346 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:19.484604 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:19.490458 systemd-logind[1448]: New session 25 of user core. Mar 2 13:07:19.497189 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 2 13:07:19.619185 sshd[4218]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:19.628264 systemd[1]: sshd@24-10.0.0.87:22-10.0.0.1:38346.service: Deactivated successfully. Mar 2 13:07:19.631181 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 13:07:19.633576 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Mar 2 13:07:19.642463 systemd[1]: Started sshd@25-10.0.0.87:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350). Mar 2 13:07:19.643599 systemd-logind[1448]: Removed session 25. Mar 2 13:07:19.676857 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:19.678629 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:19.683855 systemd-logind[1448]: New session 26 of user core. Mar 2 13:07:19.697173 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 2 13:07:21.116706 containerd[1469]: time="2026-03-02T13:07:21.114249680Z" level=info msg="StopContainer for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" with timeout 30 (s)" Mar 2 13:07:21.131439 containerd[1469]: time="2026-03-02T13:07:21.131350047Z" level=info msg="Stop container \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" with signal terminated" Mar 2 13:07:21.169742 systemd[1]: run-containerd-runc-k8s.io-83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435-runc.TIStct.mount: Deactivated successfully. 
Mar 2 13:07:21.176166 systemd[1]: cri-containerd-0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda.scope: Deactivated successfully. Mar 2 13:07:21.204427 containerd[1469]: time="2026-03-02T13:07:21.204315017Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:07:21.210285 containerd[1469]: time="2026-03-02T13:07:21.210178710Z" level=info msg="StopContainer for \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\" with timeout 2 (s)" Mar 2 13:07:21.210855 containerd[1469]: time="2026-03-02T13:07:21.210825912Z" level=info msg="Stop container \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\" with signal terminated" Mar 2 13:07:21.224613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda-rootfs.mount: Deactivated successfully. Mar 2 13:07:21.228002 systemd-networkd[1399]: lxc_health: Link DOWN Mar 2 13:07:21.228016 systemd-networkd[1399]: lxc_health: Lost carrier Mar 2 13:07:21.242521 containerd[1469]: time="2026-03-02T13:07:21.242439523Z" level=info msg="shim disconnected" id=0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda namespace=k8s.io Mar 2 13:07:21.242521 containerd[1469]: time="2026-03-02T13:07:21.242514593Z" level=warning msg="cleaning up after shim disconnected" id=0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda namespace=k8s.io Mar 2 13:07:21.242818 containerd[1469]: time="2026-03-02T13:07:21.242528329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:21.262482 systemd[1]: cri-containerd-83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435.scope: Deactivated successfully. 
Mar 2 13:07:21.263227 systemd[1]: cri-containerd-83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435.scope: Consumed 8.505s CPU time. Mar 2 13:07:21.277854 containerd[1469]: time="2026-03-02T13:07:21.277737855Z" level=info msg="StopContainer for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" returns successfully" Mar 2 13:07:21.279157 containerd[1469]: time="2026-03-02T13:07:21.279012173Z" level=info msg="StopPodSandbox for \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\"" Mar 2 13:07:21.279157 containerd[1469]: time="2026-03-02T13:07:21.279093074Z" level=info msg="Container to stop \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:07:21.281238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf-shm.mount: Deactivated successfully. Mar 2 13:07:21.298407 systemd[1]: cri-containerd-2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf.scope: Deactivated successfully. Mar 2 13:07:21.308112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435-rootfs.mount: Deactivated successfully. 
Mar 2 13:07:21.327399 containerd[1469]: time="2026-03-02T13:07:21.327162695Z" level=info msg="shim disconnected" id=83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435 namespace=k8s.io Mar 2 13:07:21.327399 containerd[1469]: time="2026-03-02T13:07:21.327222898Z" level=warning msg="cleaning up after shim disconnected" id=83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435 namespace=k8s.io Mar 2 13:07:21.327399 containerd[1469]: time="2026-03-02T13:07:21.327233117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:21.343898 containerd[1469]: time="2026-03-02T13:07:21.343760134Z" level=info msg="shim disconnected" id=2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf namespace=k8s.io Mar 2 13:07:21.343898 containerd[1469]: time="2026-03-02T13:07:21.343852035Z" level=warning msg="cleaning up after shim disconnected" id=2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf namespace=k8s.io Mar 2 13:07:21.343898 containerd[1469]: time="2026-03-02T13:07:21.343867464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:21.355278 containerd[1469]: time="2026-03-02T13:07:21.355229953Z" level=info msg="StopContainer for \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\" returns successfully" Mar 2 13:07:21.356350 containerd[1469]: time="2026-03-02T13:07:21.356304033Z" level=info msg="StopPodSandbox for \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\"" Mar 2 13:07:21.356527 containerd[1469]: time="2026-03-02T13:07:21.356366291Z" level=info msg="Container to stop \"9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:07:21.356527 containerd[1469]: time="2026-03-02T13:07:21.356386498Z" level=info msg="Container to stop \"42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 
2 13:07:21.356527 containerd[1469]: time="2026-03-02T13:07:21.356401105Z" level=info msg="Container to stop \"e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:07:21.356527 containerd[1469]: time="2026-03-02T13:07:21.356414982Z" level=info msg="Container to stop \"681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:07:21.356527 containerd[1469]: time="2026-03-02T13:07:21.356428978Z" level=info msg="Container to stop \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:07:21.366065 systemd[1]: cri-containerd-5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120.scope: Deactivated successfully. Mar 2 13:07:21.370905 containerd[1469]: time="2026-03-02T13:07:21.370562774Z" level=info msg="TearDown network for sandbox \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" successfully" Mar 2 13:07:21.370905 containerd[1469]: time="2026-03-02T13:07:21.370592189Z" level=info msg="StopPodSandbox for \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" returns successfully" Mar 2 13:07:21.410201 containerd[1469]: time="2026-03-02T13:07:21.410100994Z" level=info msg="shim disconnected" id=5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120 namespace=k8s.io Mar 2 13:07:21.410574 containerd[1469]: time="2026-03-02T13:07:21.410190821Z" level=warning msg="cleaning up after shim disconnected" id=5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120 namespace=k8s.io Mar 2 13:07:21.410574 containerd[1469]: time="2026-03-02T13:07:21.410377481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:21.426810 kubelet[2554]: I0302 13:07:21.426353 2554 scope.go:122] "RemoveContainer" 
containerID="0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda" Mar 2 13:07:21.432718 containerd[1469]: time="2026-03-02T13:07:21.432640011Z" level=info msg="RemoveContainer for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\"" Mar 2 13:07:21.433298 containerd[1469]: time="2026-03-02T13:07:21.433186296Z" level=info msg="TearDown network for sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" successfully" Mar 2 13:07:21.433298 containerd[1469]: time="2026-03-02T13:07:21.433242801Z" level=info msg="StopPodSandbox for \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" returns successfully" Mar 2 13:07:21.439392 containerd[1469]: time="2026-03-02T13:07:21.439349463Z" level=info msg="RemoveContainer for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" returns successfully" Mar 2 13:07:21.439728 kubelet[2554]: I0302 13:07:21.439587 2554 scope.go:122] "RemoveContainer" containerID="0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda" Mar 2 13:07:21.446736 containerd[1469]: time="2026-03-02T13:07:21.446558534Z" level=error msg="ContainerStatus for \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\": not found" Mar 2 13:07:21.456452 kubelet[2554]: E0302 13:07:21.456298 2554 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\": not found" containerID="0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda" Mar 2 13:07:21.456452 kubelet[2554]: I0302 13:07:21.456388 2554 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda"} err="failed to get container status \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a20a99b77c166022174b5f897bb7f5ca885adb363dd2656481d46a8a59ddcda\": not found" Mar 2 13:07:21.568143 kubelet[2554]: I0302 13:07:21.567155 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-kernel\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568143 kubelet[2554]: I0302 13:07:21.567205 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-lib-modules\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568143 kubelet[2554]: I0302 13:07:21.567224 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hostproc\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hostproc\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568143 kubelet[2554]: I0302 13:07:21.567247 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/5c90a5e1-8c29-42d3-add8-83bd26f96c60-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c90a5e1-8c29-42d3-add8-83bd26f96c60-clustermesh-secrets\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: 
\"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568143 kubelet[2554]: I0302 13:07:21.567264 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-etc-cni-netd\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568505 kubelet[2554]: I0302 13:07:21.567282 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hubble-tls\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568505 kubelet[2554]: I0302 13:07:21.567298 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-run\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568505 kubelet[2554]: I0302 13:07:21.567313 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cni-path\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cni-path\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568505 kubelet[2554]: I0302 13:07:21.567299 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-kernel" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.568505 kubelet[2554]: I0302 13:07:21.567349 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-cgroup" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.568681 kubelet[2554]: I0302 13:07:21.567342 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hostproc" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.568681 kubelet[2554]: I0302 13:07:21.567327 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-cgroup\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568681 kubelet[2554]: I0302 13:07:21.567423 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-kube-api-access-cdswl\" (UniqueName: \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-kube-api-access-cdswl\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568681 kubelet[2554]: I0302 13:07:21.567447 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-bpf-maps\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568681 kubelet[2554]: I0302 13:07:21.567462 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-xtables-lock\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568858 kubelet[2554]: I0302 13:07:21.567480 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-config-path\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568858 kubelet[2554]: I0302 13:07:21.567497 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-net\") pod \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\" (UID: \"5c90a5e1-8c29-42d3-add8-83bd26f96c60\") " Mar 2 13:07:21.568858 kubelet[2554]: I0302 13:07:21.567517 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-cilium-config-path\") pod \"a8a2edeb-2d52-4286-9ff3-468cffdbf3df\" (UID: \"a8a2edeb-2d52-4286-9ff3-468cffdbf3df\") " Mar 2 13:07:21.568858 kubelet[2554]: I0302 13:07:21.567535 2554 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/projected/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-kube-api-access-lkkcs\" (UniqueName: \"kubernetes.io/projected/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-kube-api-access-lkkcs\") pod \"a8a2edeb-2d52-4286-9ff3-468cffdbf3df\" (UID: \"a8a2edeb-2d52-4286-9ff3-468cffdbf3df\") " Mar 2 13:07:21.568858 kubelet[2554]: I0302 13:07:21.567583 2554 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.568858 kubelet[2554]: I0302 13:07:21.567596 2554 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.569244 kubelet[2554]: I0302 13:07:21.567605 2554 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.569244 kubelet[2554]: I0302 13:07:21.567368 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-lib-modules" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.569244 kubelet[2554]: I0302 13:07:21.567385 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-run" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.569244 kubelet[2554]: I0302 13:07:21.567396 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cni-path" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.569244 kubelet[2554]: I0302 13:07:21.568235 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-etc-cni-netd" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.569244 kubelet[2554]: I0302 13:07:21.568263 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-xtables-lock" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.571409 kubelet[2554]: I0302 13:07:21.571372 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-net" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.572061 kubelet[2554]: I0302 13:07:21.571849 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-bpf-maps" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:07:21.573136 kubelet[2554]: I0302 13:07:21.573064 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hubble-tls" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:07:21.573809 kubelet[2554]: I0302 13:07:21.573722 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-kube-api-access-lkkcs" pod "a8a2edeb-2d52-4286-9ff3-468cffdbf3df" (UID: "a8a2edeb-2d52-4286-9ff3-468cffdbf3df"). InnerVolumeSpecName "kube-api-access-lkkcs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:07:21.574792 kubelet[2554]: I0302 13:07:21.574767 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-kube-api-access-cdswl" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "kube-api-access-cdswl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:07:21.577059 kubelet[2554]: I0302 13:07:21.577032 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-config-path" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:07:21.577653 kubelet[2554]: I0302 13:07:21.577589 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c90a5e1-8c29-42d3-add8-83bd26f96c60-clustermesh-secrets" pod "5c90a5e1-8c29-42d3-add8-83bd26f96c60" (UID: "5c90a5e1-8c29-42d3-add8-83bd26f96c60"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:07:21.578246 kubelet[2554]: I0302 13:07:21.578172 2554 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-cilium-config-path" pod "a8a2edeb-2d52-4286-9ff3-468cffdbf3df" (UID: "a8a2edeb-2d52-4286-9ff3-468cffdbf3df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:07:21.599423 kubelet[2554]: E0302 13:07:21.599368 2554 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667706 2554 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667758 2554 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkkcs\" (UniqueName: \"kubernetes.io/projected/a8a2edeb-2d52-4286-9ff3-468cffdbf3df-kube-api-access-lkkcs\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667770 2554 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667778 2554 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c90a5e1-8c29-42d3-add8-83bd26f96c60-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667789 2554 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667799 2554 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667807 2554 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.667807 kubelet[2554]: I0302 13:07:21.667815 2554 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.668235 kubelet[2554]: I0302 13:07:21.667827 2554 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdswl\" (UniqueName: \"kubernetes.io/projected/5c90a5e1-8c29-42d3-add8-83bd26f96c60-kube-api-access-cdswl\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.668235 kubelet[2554]: I0302 13:07:21.667835 2554 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.668235 kubelet[2554]: I0302 13:07:21.667842 2554 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.668235 kubelet[2554]: I0302 13:07:21.667852 2554 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c90a5e1-8c29-42d3-add8-83bd26f96c60-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 
13:07:21.668235 kubelet[2554]: I0302 13:07:21.667860 2554 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c90a5e1-8c29-42d3-add8-83bd26f96c60-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 2 13:07:21.699442 kubelet[2554]: E0302 13:07:21.699349 2554 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-j7nng" podUID="12dfd643-3752-44c6-8b4e-1b7955ff24d0" Mar 2 13:07:21.736401 systemd[1]: Removed slice kubepods-besteffort-poda8a2edeb_2d52_4286_9ff3_468cffdbf3df.slice - libcontainer container kubepods-besteffort-poda8a2edeb_2d52_4286_9ff3_468cffdbf3df.slice. Mar 2 13:07:22.153827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf-rootfs.mount: Deactivated successfully. Mar 2 13:07:22.154116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120-rootfs.mount: Deactivated successfully. Mar 2 13:07:22.154235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120-shm.mount: Deactivated successfully. Mar 2 13:07:22.154358 systemd[1]: var-lib-kubelet-pods-a8a2edeb\x2d2d52\x2d4286\x2d9ff3\x2d468cffdbf3df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlkkcs.mount: Deactivated successfully. Mar 2 13:07:22.154472 systemd[1]: var-lib-kubelet-pods-5c90a5e1\x2d8c29\x2d42d3\x2dadd8\x2d83bd26f96c60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdswl.mount: Deactivated successfully. 
Mar 2 13:07:22.154558 systemd[1]: var-lib-kubelet-pods-5c90a5e1\x2d8c29\x2d42d3\x2dadd8\x2d83bd26f96c60-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 2 13:07:22.154635 systemd[1]: var-lib-kubelet-pods-5c90a5e1\x2d8c29\x2d42d3\x2dadd8\x2d83bd26f96c60-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 2 13:07:22.446127 kubelet[2554]: I0302 13:07:22.445489 2554 scope.go:122] "RemoveContainer" containerID="83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435" Mar 2 13:07:22.447486 containerd[1469]: time="2026-03-02T13:07:22.447027694Z" level=info msg="RemoveContainer for \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\"" Mar 2 13:07:22.453086 containerd[1469]: time="2026-03-02T13:07:22.453001318Z" level=info msg="RemoveContainer for \"83435d4d55fede14b24d1f3de4470c8272be3bbb91dc56b609b1f7036295e435\" returns successfully" Mar 2 13:07:22.453348 kubelet[2554]: I0302 13:07:22.453325 2554 scope.go:122] "RemoveContainer" containerID="681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9" Mar 2 13:07:22.454488 systemd[1]: Removed slice kubepods-burstable-pod5c90a5e1_8c29_42d3_add8_83bd26f96c60.slice - libcontainer container kubepods-burstable-pod5c90a5e1_8c29_42d3_add8_83bd26f96c60.slice. Mar 2 13:07:22.454876 containerd[1469]: time="2026-03-02T13:07:22.454558184Z" level=info msg="RemoveContainer for \"681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9\"" Mar 2 13:07:22.455196 systemd[1]: kubepods-burstable-pod5c90a5e1_8c29_42d3_add8_83bd26f96c60.slice: Consumed 8.692s CPU time. 
Mar 2 13:07:22.459479 containerd[1469]: time="2026-03-02T13:07:22.459429678Z" level=info msg="RemoveContainer for \"681c0e54ebe2db21e2e31497c928527594f3c8232e3e774d4ef7577d143194c9\" returns successfully" Mar 2 13:07:22.459811 kubelet[2554]: I0302 13:07:22.459693 2554 scope.go:122] "RemoveContainer" containerID="e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7" Mar 2 13:07:22.462322 containerd[1469]: time="2026-03-02T13:07:22.460819584Z" level=info msg="RemoveContainer for \"e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7\"" Mar 2 13:07:22.487545 containerd[1469]: time="2026-03-02T13:07:22.487427343Z" level=info msg="RemoveContainer for \"e65f3b72ab726fbebe76f8808604bb5fbe3f4d58f9941b7e690c1bf62e1078c7\" returns successfully" Mar 2 13:07:22.488082 kubelet[2554]: I0302 13:07:22.488026 2554 scope.go:122] "RemoveContainer" containerID="9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17" Mar 2 13:07:22.490226 containerd[1469]: time="2026-03-02T13:07:22.490140709Z" level=info msg="RemoveContainer for \"9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17\"" Mar 2 13:07:22.494471 containerd[1469]: time="2026-03-02T13:07:22.494378353Z" level=info msg="RemoveContainer for \"9a99c838b39d645d89b703b1a92b30235b2c8b6f9298d2b1c5afe197ef8dcf17\" returns successfully" Mar 2 13:07:22.494692 kubelet[2554]: I0302 13:07:22.494627 2554 scope.go:122] "RemoveContainer" containerID="42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d" Mar 2 13:07:22.495876 containerd[1469]: time="2026-03-02T13:07:22.495837425Z" level=info msg="RemoveContainer for \"42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d\"" Mar 2 13:07:22.504494 containerd[1469]: time="2026-03-02T13:07:22.504427975Z" level=info msg="RemoveContainer for \"42134bb13faf73b33ed7485692b52be050970e57f3bc26105ac071fb8a25d34d\" returns successfully" Mar 2 13:07:22.700607 kubelet[2554]: E0302 13:07:22.700410 2554 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:22.703750 kubelet[2554]: I0302 13:07:22.703679 2554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5c90a5e1-8c29-42d3-add8-83bd26f96c60" path="/var/lib/kubelet/pods/5c90a5e1-8c29-42d3-add8-83bd26f96c60/volumes" Mar 2 13:07:22.705490 kubelet[2554]: I0302 13:07:22.705357 2554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a8a2edeb-2d52-4286-9ff3-468cffdbf3df" path="/var/lib/kubelet/pods/a8a2edeb-2d52-4286-9ff3-468cffdbf3df/volumes" Mar 2 13:07:23.053432 sshd[4232]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:23.062387 systemd[1]: sshd@25-10.0.0.87:22-10.0.0.1:38350.service: Deactivated successfully. Mar 2 13:07:23.065152 systemd[1]: session-26.scope: Deactivated successfully. Mar 2 13:07:23.067467 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Mar 2 13:07:23.074461 systemd[1]: Started sshd@26-10.0.0.87:22-10.0.0.1:34712.service - OpenSSH per-connection server daemon (10.0.0.1:34712). Mar 2 13:07:23.076324 systemd-logind[1448]: Removed session 26. Mar 2 13:07:23.121581 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 34712 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:23.124766 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:23.132819 systemd-logind[1448]: New session 27 of user core. Mar 2 13:07:23.143271 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 2 13:07:23.699668 kubelet[2554]: E0302 13:07:23.699406 2554 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-j7nng" podUID="12dfd643-3752-44c6-8b4e-1b7955ff24d0" Mar 2 13:07:23.766999 sshd[4397]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:23.779584 systemd[1]: sshd@26-10.0.0.87:22-10.0.0.1:34712.service: Deactivated successfully. Mar 2 13:07:23.782281 systemd[1]: session-27.scope: Deactivated successfully. Mar 2 13:07:23.785350 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Mar 2 13:07:23.794616 systemd[1]: Started sshd@27-10.0.0.87:22-10.0.0.1:34718.service - OpenSSH per-connection server daemon (10.0.0.1:34718). Mar 2 13:07:23.797838 systemd-logind[1448]: Removed session 27. Mar 2 13:07:23.839585 systemd[1]: Created slice kubepods-burstable-pod9a829bc6_40b3_47f7_a649_7d95548ce202.slice - libcontainer container kubepods-burstable-pod9a829bc6_40b3_47f7_a649_7d95548ce202.slice. Mar 2 13:07:23.862359 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 34718 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:23.865444 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:23.873621 kubelet[2554]: I0302 13:07:23.873465 2554 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:07:23Z","lastTransitionTime":"2026-03-02T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 2 13:07:23.879415 systemd-logind[1448]: New session 28 of user core. 
Mar 2 13:07:23.884232 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 2 13:07:23.952432 sshd[4410]: pam_unix(sshd:session): session closed for user core Mar 2 13:07:23.962743 systemd[1]: sshd@27-10.0.0.87:22-10.0.0.1:34718.service: Deactivated successfully. Mar 2 13:07:23.965829 systemd[1]: session-28.scope: Deactivated successfully. Mar 2 13:07:23.969079 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit. Mar 2 13:07:23.976725 systemd[1]: Started sshd@28-10.0.0.87:22-10.0.0.1:34724.service - OpenSSH per-connection server daemon (10.0.0.1:34724). Mar 2 13:07:23.978325 systemd-logind[1448]: Removed session 28. Mar 2 13:07:23.989397 kubelet[2554]: I0302 13:07:23.989264 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-host-proc-sys-kernel\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989552 kubelet[2554]: I0302 13:07:23.989421 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-bpf-maps\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989552 kubelet[2554]: I0302 13:07:23.989503 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-hostproc\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989630 kubelet[2554]: I0302 13:07:23.989606 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9a829bc6-40b3-47f7-a649-7d95548ce202-clustermesh-secrets\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989829 kubelet[2554]: I0302 13:07:23.989725 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-cilium-run\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989829 kubelet[2554]: I0302 13:07:23.989773 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-lib-modules\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989829 kubelet[2554]: I0302 13:07:23.989794 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-xtables-lock\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.989829 kubelet[2554]: I0302 13:07:23.989817 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a829bc6-40b3-47f7-a649-7d95548ce202-hubble-tls\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990089 kubelet[2554]: I0302 13:07:23.990028 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a829bc6-40b3-47f7-a649-7d95548ce202-cilium-config-path\") pod \"cilium-h9m8g\" (UID: 
\"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990089 kubelet[2554]: I0302 13:07:23.990071 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-host-proc-sys-net\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990174 kubelet[2554]: I0302 13:07:23.990093 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a829bc6-40b3-47f7-a649-7d95548ce202-cilium-ipsec-secrets\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990174 kubelet[2554]: I0302 13:07:23.990116 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqbg\" (UniqueName: \"kubernetes.io/projected/9a829bc6-40b3-47f7-a649-7d95548ce202-kube-api-access-gjqbg\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990174 kubelet[2554]: I0302 13:07:23.990137 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-cni-path\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990174 kubelet[2554]: I0302 13:07:23.990160 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-cilium-cgroup\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:23.990318 kubelet[2554]: 
I0302 13:07:23.990184 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a829bc6-40b3-47f7-a649-7d95548ce202-etc-cni-netd\") pod \"cilium-h9m8g\" (UID: \"9a829bc6-40b3-47f7-a649-7d95548ce202\") " pod="kube-system/cilium-h9m8g" Mar 2 13:07:24.035314 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 34724 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:07:24.038119 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:07:24.046117 systemd-logind[1448]: New session 29 of user core. Mar 2 13:07:24.058285 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 2 13:07:24.151437 kubelet[2554]: E0302 13:07:24.151232 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:24.152981 containerd[1469]: time="2026-03-02T13:07:24.152159754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9m8g,Uid:9a829bc6-40b3-47f7-a649-7d95548ce202,Namespace:kube-system,Attempt:0,}" Mar 2 13:07:24.198453 containerd[1469]: time="2026-03-02T13:07:24.198116799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:07:24.198453 containerd[1469]: time="2026-03-02T13:07:24.198263354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:07:24.198453 containerd[1469]: time="2026-03-02T13:07:24.198275226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:07:24.198453 containerd[1469]: time="2026-03-02T13:07:24.198364954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:07:24.271358 systemd[1]: Started cri-containerd-e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234.scope - libcontainer container e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234. Mar 2 13:07:24.312794 containerd[1469]: time="2026-03-02T13:07:24.312754611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9m8g,Uid:9a829bc6-40b3-47f7-a649-7d95548ce202,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\"" Mar 2 13:07:24.314561 kubelet[2554]: E0302 13:07:24.314285 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:24.344564 containerd[1469]: time="2026-03-02T13:07:24.344357782Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:07:24.380342 containerd[1469]: time="2026-03-02T13:07:24.380225182Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec\"" Mar 2 13:07:24.383234 containerd[1469]: time="2026-03-02T13:07:24.381528643Z" level=info msg="StartContainer for \"1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec\"" Mar 2 13:07:24.452336 systemd[1]: Started cri-containerd-1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec.scope - libcontainer container 1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec. 
Mar 2 13:07:24.498166 containerd[1469]: time="2026-03-02T13:07:24.498037804Z" level=info msg="StartContainer for \"1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec\" returns successfully" Mar 2 13:07:24.520399 systemd[1]: cri-containerd-1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec.scope: Deactivated successfully. Mar 2 13:07:24.548114 kubelet[2554]: E0302 13:07:24.546673 2554 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a829bc6_40b3_47f7_a649_7d95548ce202.slice/cri-containerd-1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec.scope\": RecentStats: unable to find data in memory cache]" Mar 2 13:07:24.592388 containerd[1469]: time="2026-03-02T13:07:24.592257270Z" level=info msg="shim disconnected" id=1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec namespace=k8s.io Mar 2 13:07:24.592388 containerd[1469]: time="2026-03-02T13:07:24.592356125Z" level=warning msg="cleaning up after shim disconnected" id=1c768b4e4de9ba4a666cd7500eef66103a4dd4ac0e4ae6b3b7964a9a0a636fec namespace=k8s.io Mar 2 13:07:24.592388 containerd[1469]: time="2026-03-02T13:07:24.592373257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:25.460761 kubelet[2554]: E0302 13:07:25.460611 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:25.468882 containerd[1469]: time="2026-03-02T13:07:25.468715372Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:07:25.492056 containerd[1469]: time="2026-03-02T13:07:25.491897825Z" level=info msg="CreateContainer within sandbox 
\"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8\"" Mar 2 13:07:25.493011 containerd[1469]: time="2026-03-02T13:07:25.492879861Z" level=info msg="StartContainer for \"6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8\"" Mar 2 13:07:25.569185 systemd[1]: Started cri-containerd-6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8.scope - libcontainer container 6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8. Mar 2 13:07:25.626669 containerd[1469]: time="2026-03-02T13:07:25.626369907Z" level=info msg="StartContainer for \"6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8\" returns successfully" Mar 2 13:07:25.649228 systemd[1]: cri-containerd-6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8.scope: Deactivated successfully. Mar 2 13:07:25.687425 containerd[1469]: time="2026-03-02T13:07:25.687310747Z" level=info msg="shim disconnected" id=6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8 namespace=k8s.io Mar 2 13:07:25.687425 containerd[1469]: time="2026-03-02T13:07:25.687404062Z" level=warning msg="cleaning up after shim disconnected" id=6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8 namespace=k8s.io Mar 2 13:07:25.687425 containerd[1469]: time="2026-03-02T13:07:25.687417516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:25.700044 kubelet[2554]: E0302 13:07:25.699852 2554 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-j7nng" podUID="12dfd643-3752-44c6-8b4e-1b7955ff24d0" Mar 2 13:07:26.103692 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6ea1992b87c523aa9dd7a075ddcf783612d904424c68b9617e556b3c19ea06a8-rootfs.mount: Deactivated successfully. Mar 2 13:07:26.466465 kubelet[2554]: E0302 13:07:26.466426 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:26.474560 containerd[1469]: time="2026-03-02T13:07:26.474458957Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:07:26.503802 containerd[1469]: time="2026-03-02T13:07:26.503708982Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a\"" Mar 2 13:07:26.504704 containerd[1469]: time="2026-03-02T13:07:26.504557858Z" level=info msg="StartContainer for \"f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a\"" Mar 2 13:07:26.573199 systemd[1]: Started cri-containerd-f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a.scope - libcontainer container f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a. Mar 2 13:07:26.600567 kubelet[2554]: E0302 13:07:26.600469 2554 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:07:26.624608 containerd[1469]: time="2026-03-02T13:07:26.623615605Z" level=info msg="StartContainer for \"f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a\" returns successfully" Mar 2 13:07:26.631791 systemd[1]: cri-containerd-f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a.scope: Deactivated successfully. 
Mar 2 13:07:26.672584 containerd[1469]: time="2026-03-02T13:07:26.672405073Z" level=info msg="shim disconnected" id=f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a namespace=k8s.io Mar 2 13:07:26.672584 containerd[1469]: time="2026-03-02T13:07:26.672514939Z" level=warning msg="cleaning up after shim disconnected" id=f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a namespace=k8s.io Mar 2 13:07:26.672584 containerd[1469]: time="2026-03-02T13:07:26.672537702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:27.103277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f77b25f059f1884ef5ca90c200864f91c5d8295678956a078d43364197b1bb4a-rootfs.mount: Deactivated successfully. Mar 2 13:07:27.471756 kubelet[2554]: E0302 13:07:27.471560 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:07:27.477565 containerd[1469]: time="2026-03-02T13:07:27.477476262Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:07:27.498843 containerd[1469]: time="2026-03-02T13:07:27.498647950Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0\"" Mar 2 13:07:27.499824 containerd[1469]: time="2026-03-02T13:07:27.499770326Z" level=info msg="StartContainer for \"38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0\"" Mar 2 13:07:27.559225 systemd[1]: Started cri-containerd-38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0.scope - libcontainer container 38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0. 
Mar 2 13:07:27.592279 systemd[1]: cri-containerd-38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0.scope: Deactivated successfully. Mar 2 13:07:27.596386 containerd[1469]: time="2026-03-02T13:07:27.596314776Z" level=info msg="StartContainer for \"38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0\" returns successfully" Mar 2 13:07:27.640788 containerd[1469]: time="2026-03-02T13:07:27.640695180Z" level=info msg="shim disconnected" id=38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0 namespace=k8s.io Mar 2 13:07:27.640788 containerd[1469]: time="2026-03-02T13:07:27.640770901Z" level=warning msg="cleaning up after shim disconnected" id=38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0 namespace=k8s.io Mar 2 13:07:27.640788 containerd[1469]: time="2026-03-02T13:07:27.640784566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:07:27.699526 kubelet[2554]: E0302 13:07:27.699464 2554 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-j7nng" podUID="12dfd643-3752-44c6-8b4e-1b7955ff24d0" Mar 2 13:07:28.104070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38619419ba4ce716479fb047bb997c17fb21d927558aba0cea19df09ea31cbe0-rootfs.mount: Deactivated successfully. 
Mar 2 13:07:28.478492 kubelet[2554]: E0302 13:07:28.478356 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:28.485568 containerd[1469]: time="2026-03-02T13:07:28.485353196Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:07:28.515585 containerd[1469]: time="2026-03-02T13:07:28.515471036Z" level=info msg="CreateContainer within sandbox \"e4bf0c86ac0dcf29f7f9a8d42161fddbf07612ca5164f0298f636e807e90f234\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"920842f9e573dc88dd285ad5421ff9d146a35ccee3973148d49ed05396a4afa3\""
Mar 2 13:07:28.519416 containerd[1469]: time="2026-03-02T13:07:28.517184750Z" level=info msg="StartContainer for \"920842f9e573dc88dd285ad5421ff9d146a35ccee3973148d49ed05396a4afa3\""
Mar 2 13:07:28.566280 systemd[1]: Started cri-containerd-920842f9e573dc88dd285ad5421ff9d146a35ccee3973148d49ed05396a4afa3.scope - libcontainer container 920842f9e573dc88dd285ad5421ff9d146a35ccee3973148d49ed05396a4afa3.
Mar 2 13:07:28.617738 containerd[1469]: time="2026-03-02T13:07:28.617549009Z" level=info msg="StartContainer for \"920842f9e573dc88dd285ad5421ff9d146a35ccee3973148d49ed05396a4afa3\" returns successfully"
Mar 2 13:07:29.206127 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 2 13:07:29.486162 kubelet[2554]: E0302 13:07:29.485914 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:29.512174 kubelet[2554]: I0302 13:07:29.510578 2554 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-h9m8g" podStartSLOduration=6.510562451 podStartE2EDuration="6.510562451s" podCreationTimestamp="2026-03-02 13:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:07:29.507549034 +0000 UTC m=+111.001653233" watchObservedRunningTime="2026-03-02 13:07:29.510562451 +0000 UTC m=+111.004666630"
Mar 2 13:07:29.699805 kubelet[2554]: E0302 13:07:29.699714 2554 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-j7nng" podUID="12dfd643-3752-44c6-8b4e-1b7955ff24d0"
Mar 2 13:07:30.488045 kubelet[2554]: E0302 13:07:30.487865 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:31.490571 kubelet[2554]: E0302 13:07:31.490479 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:31.700185 kubelet[2554]: E0302 13:07:31.699917 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:32.869915 systemd-networkd[1399]: lxc_health: Link UP
Mar 2 13:07:32.880146 systemd-networkd[1399]: lxc_health: Gained carrier
Mar 2 13:07:34.151324 kubelet[2554]: E0302 13:07:34.151234 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:34.497862 kubelet[2554]: E0302 13:07:34.497728 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:34.671307 systemd-networkd[1399]: lxc_health: Gained IPv6LL
Mar 2 13:07:35.500197 kubelet[2554]: E0302 13:07:35.500089 2554 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:38.671440 containerd[1469]: time="2026-03-02T13:07:38.671338670Z" level=info msg="StopPodSandbox for \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\""
Mar 2 13:07:38.672393 containerd[1469]: time="2026-03-02T13:07:38.671513178Z" level=info msg="TearDown network for sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" successfully"
Mar 2 13:07:38.672393 containerd[1469]: time="2026-03-02T13:07:38.671533796Z" level=info msg="StopPodSandbox for \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" returns successfully"
Mar 2 13:07:38.672826 containerd[1469]: time="2026-03-02T13:07:38.672752676Z" level=info msg="RemovePodSandbox for \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\""
Mar 2 13:07:38.672826 containerd[1469]: time="2026-03-02T13:07:38.672815664Z" level=info msg="Forcibly stopping sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\""
Mar 2 13:07:38.673054 containerd[1469]: time="2026-03-02T13:07:38.672908097Z" level=info msg="TearDown network for sandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" successfully"
Mar 2 13:07:38.691132 containerd[1469]: time="2026-03-02T13:07:38.690914462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:07:38.691132 containerd[1469]: time="2026-03-02T13:07:38.691102485Z" level=info msg="RemovePodSandbox \"5e77e68d7da6dcd3f86e50e26fd664a9250283f182184598ec2db9a1ea9ce120\" returns successfully"
Mar 2 13:07:38.694732 containerd[1469]: time="2026-03-02T13:07:38.694624570Z" level=info msg="StopPodSandbox for \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\""
Mar 2 13:07:38.694839 containerd[1469]: time="2026-03-02T13:07:38.694786834Z" level=info msg="TearDown network for sandbox \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" successfully"
Mar 2 13:07:38.694839 containerd[1469]: time="2026-03-02T13:07:38.694807903Z" level=info msg="StopPodSandbox for \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" returns successfully"
Mar 2 13:07:38.695718 containerd[1469]: time="2026-03-02T13:07:38.695663228Z" level=info msg="RemovePodSandbox for \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\""
Mar 2 13:07:38.695812 containerd[1469]: time="2026-03-02T13:07:38.695721017Z" level=info msg="Forcibly stopping sandbox \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\""
Mar 2 13:07:38.695848 containerd[1469]: time="2026-03-02T13:07:38.695807811Z" level=info msg="TearDown network for sandbox \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" successfully"
Mar 2 13:07:38.700901 containerd[1469]: time="2026-03-02T13:07:38.700797824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:07:38.700901 containerd[1469]: time="2026-03-02T13:07:38.700851233Z" level=info msg="RemovePodSandbox \"2abd2b76b516400e9a4e5d8ec87d572c5315123e2766497a7365307c80b5f2cf\" returns successfully"
Mar 2 13:07:39.195585 sshd[4420]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:39.202159 systemd[1]: sshd@28-10.0.0.87:22-10.0.0.1:34724.service: Deactivated successfully.
Mar 2 13:07:39.205831 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 13:07:39.207508 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit.
Mar 2 13:07:39.209358 systemd-logind[1448]: Removed session 29.