Mar 3 13:44:15.233443 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:44:15.233476 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:44:15.233517 kernel: BIOS-provided physical RAM map:
Mar 3 13:44:15.233528 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 3 13:44:15.233539 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 3 13:44:15.233547 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 3 13:44:15.233559 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 3 13:44:15.233568 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 3 13:44:15.233601 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 3 13:44:15.233611 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 3 13:44:15.233620 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:44:15.233696 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 3 13:44:15.233706 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:44:15.233715 kernel: NX (Execute Disable) protection: active
Mar 3 13:44:15.233727 kernel: APIC: Static calls initialized
Mar 3 13:44:15.233737 kernel: SMBIOS 2.8 present.
Mar 3 13:44:15.233791 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 3 13:44:15.233802 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:44:15.233813 kernel: Hypervisor detected: KVM
Mar 3 13:44:15.233823 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 3 13:44:15.233834 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:44:15.233844 kernel: kvm-clock: using sched offset of 11562766734 cycles
Mar 3 13:44:15.233855 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:44:15.233865 kernel: tsc: Detected 2445.426 MHz processor
Mar 3 13:44:15.233876 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:44:15.233887 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:44:15.233925 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 3 13:44:15.233936 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 3 13:44:15.233948 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:44:15.233960 kernel: Using GB pages for direct mapping
Mar 3 13:44:15.233971 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:44:15.233981 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 3 13:44:15.233991 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234002 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234012 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234050 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 3 13:44:15.234063 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234120 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234133 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234144 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:44:15.234182 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 3 13:44:15.234214 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 3 13:44:15.234225 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 3 13:44:15.234236 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 3 13:44:15.234247 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 3 13:44:15.234258 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 3 13:44:15.234268 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 3 13:44:15.234279 kernel: No NUMA configuration found
Mar 3 13:44:15.234292 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 3 13:44:15.234329 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 3 13:44:15.234340 kernel: Zone ranges:
Mar 3 13:44:15.234351 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:44:15.234362 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 3 13:44:15.234373 kernel: Normal empty
Mar 3 13:44:15.234384 kernel: Device empty
Mar 3 13:44:15.234394 kernel: Movable zone start for each node
Mar 3 13:44:15.234405 kernel: Early memory node ranges
Mar 3 13:44:15.234416 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 3 13:44:15.234427 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 3 13:44:15.234460 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 3 13:44:15.234471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:44:15.234481 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 3 13:44:15.234516 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 3 13:44:15.234528 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 3 13:44:15.234539 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:44:15.234550 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 3 13:44:15.234561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 3 13:44:15.234591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:44:15.234622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:44:15.234633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:44:15.234684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:44:15.234696 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:44:15.234707 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:44:15.234718 kernel: TSC deadline timer available
Mar 3 13:44:15.234729 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:44:15.234739 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:44:15.234750 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:44:15.234793 kernel: CPU topo: Max. threads per core: 1
Mar 3 13:44:15.234804 kernel: CPU topo: Num. cores per package: 4
Mar 3 13:44:15.234815 kernel: CPU topo: Num. threads per package: 4
Mar 3 13:44:15.234825 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 3 13:44:15.234836 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:44:15.234847 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 3 13:44:15.234858 kernel: kvm-guest: setup PV sched yield
Mar 3 13:44:15.234868 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 3 13:44:15.234879 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:44:15.234914 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:44:15.234927 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 3 13:44:15.234939 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 3 13:44:15.234951 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 3 13:44:15.234963 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 3 13:44:15.234973 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:44:15.234984 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:44:15.234997 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:44:15.235008 kernel: random: crng init done
Mar 3 13:44:15.235048 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 13:44:15.235060 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:44:15.235071 kernel: Fallback order for Node 0: 0
Mar 3 13:44:15.235130 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 3 13:44:15.235142 kernel: Policy zone: DMA32
Mar 3 13:44:15.235153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:44:15.235164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 3 13:44:15.235174 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:44:15.235185 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:44:15.235220 kernel: Dynamic Preempt: voluntary
Mar 3 13:44:15.235231 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:44:15.235243 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:44:15.235255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 3 13:44:15.235267 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:44:15.235299 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:44:15.235310 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:44:15.235322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:44:15.235332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 3 13:44:15.235365 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:44:15.235377 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:44:15.235388 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:44:15.235399 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 3 13:44:15.235410 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:44:15.235485 kernel: Console: colour VGA+ 80x25
Mar 3 13:44:15.235522 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:44:15.235534 kernel: ACPI: Core revision 20240827
Mar 3 13:44:15.235545 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 3 13:44:15.235557 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:44:15.235568 kernel: x2apic enabled
Mar 3 13:44:15.235580 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:44:15.235614 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 3 13:44:15.235627 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 3 13:44:15.235638 kernel: kvm-guest: setup PV IPIs
Mar 3 13:44:15.235693 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 3 13:44:15.235709 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:44:15.235720 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 3 13:44:15.235733 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 3 13:44:15.235745 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 3 13:44:15.235756 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 3 13:44:15.235768 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:44:15.235780 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:44:15.235791 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:44:15.235803 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:44:15.235818 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 3 13:44:15.235830 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 3 13:44:15.235842 kernel: active return thunk: srso_alias_return_thunk
Mar 3 13:44:15.235853 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 3 13:44:15.235865 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 3 13:44:15.235876 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:44:15.235888 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:44:15.235900 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:44:15.235914 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:44:15.235927 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:44:15.235939 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 3 13:44:15.235953 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:44:15.235964 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:44:15.235975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:44:15.235988 kernel: landlock: Up and running.
Mar 3 13:44:15.236000 kernel: SELinux: Initializing.
Mar 3 13:44:15.236012 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:44:15.236028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:44:15.236064 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 3 13:44:15.236119 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 3 13:44:15.236132 kernel: signal: max sigframe size: 1776
Mar 3 13:44:15.236143 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:44:15.236156 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:44:15.236168 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:44:15.236179 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:44:15.236191 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:44:15.236207 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:44:15.236220 kernel: .... node #0, CPUs: #1 #2 #3
Mar 3 13:44:15.236231 kernel: smp: Brought up 1 node, 4 CPUs
Mar 3 13:44:15.236243 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 3 13:44:15.236256 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Mar 3 13:44:15.236267 kernel: devtmpfs: initialized
Mar 3 13:44:15.236278 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:44:15.236290 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:44:15.236302 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 3 13:44:15.236317 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:44:15.236329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:44:15.236340 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:44:15.236352 kernel: audit: type=2000 audit(1772545450.017:1): state=initialized audit_enabled=0 res=1
Mar 3 13:44:15.236363 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:44:15.236375 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:44:15.236386 kernel: cpuidle: using governor menu
Mar 3 13:44:15.236397 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:44:15.236409 kernel: dca service started, version 1.12.1
Mar 3 13:44:15.236425 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 3 13:44:15.236437 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 3 13:44:15.236449 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:44:15.236460 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:44:15.236472 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:44:15.236483 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:44:15.236495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:44:15.236507 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:44:15.236518 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:44:15.236533 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:44:15.236544 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:44:15.236555 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 13:44:15.236567 kernel: ACPI: Interpreter enabled
Mar 3 13:44:15.236578 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 3 13:44:15.236589 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:44:15.236601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:44:15.236612 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:44:15.236624 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 3 13:44:15.236638 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:44:15.237279 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:44:15.237719 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 3 13:44:15.238310 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 3 13:44:15.238330 kernel: PCI host bridge to bus 0000:00
Mar 3 13:44:15.238746 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:44:15.238939 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:44:15.239201 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:44:15.239385 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 3 13:44:15.239564 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 3 13:44:15.239788 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 3 13:44:15.239981 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:44:15.240357 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:44:15.240632 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:44:15.240875 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 3 13:44:15.241138 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 3 13:44:15.241342 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 3 13:44:15.241536 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:44:15.241815 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 3 13:44:15.242024 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 3 13:44:15.242396 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 3 13:44:15.242596 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 3 13:44:15.242896 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 3 13:44:15.243160 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 3 13:44:15.243421 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 3 13:44:15.243622 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 3 13:44:15.243970 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 3 13:44:15.244251 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 3 13:44:15.244451 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 3 13:44:15.244687 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 3 13:44:15.244887 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 3 13:44:15.245216 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:44:15.245418 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 3 13:44:15.245707 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 3 13:44:15.245915 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 3 13:44:15.246248 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 3 13:44:15.246538 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 3 13:44:15.246786 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 3 13:44:15.246806 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:44:15.246818 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:44:15.246830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:44:15.246848 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:44:15.246859 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 3 13:44:15.246871 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 3 13:44:15.246883 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 3 13:44:15.246894 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 3 13:44:15.246905 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 3 13:44:15.246917 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 3 13:44:15.246931 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 3 13:44:15.246943 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 3 13:44:15.246961 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 3 13:44:15.246973 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 3 13:44:15.246984 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 3 13:44:15.246996 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 3 13:44:15.247007 kernel: iommu: Default domain type: Translated
Mar 3 13:44:15.247018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:44:15.247030 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:44:15.247041 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:44:15.247055 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 3 13:44:15.247071 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 3 13:44:15.247329 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 3 13:44:15.247526 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 3 13:44:15.247780 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:44:15.247799 kernel: vgaarb: loaded
Mar 3 13:44:15.247811 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 3 13:44:15.247823 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 3 13:44:15.247836 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:44:15.247856 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:44:15.247868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:44:15.247879 kernel: pnp: PnP ACPI init
Mar 3 13:44:15.248291 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 3 13:44:15.248311 kernel: pnp: PnP ACPI: found 6 devices
Mar 3 13:44:15.248324 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:44:15.248335 kernel: NET: Registered PF_INET protocol family
Mar 3 13:44:15.248347 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 13:44:15.248364 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 13:44:15.248376 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:44:15.248388 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:44:15.248399 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 13:44:15.248411 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 13:44:15.248424 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:44:15.248435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:44:15.248447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:44:15.248458 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:44:15.248708 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:44:15.248903 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:44:15.249150 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:44:15.249336 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 3 13:44:15.249515 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 3 13:44:15.249743 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 3 13:44:15.249763 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:44:15.249776 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:44:15.249794 kernel: Initialise system trusted keyrings
Mar 3 13:44:15.249806 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 13:44:15.249817 kernel: Key type asymmetric registered
Mar 3 13:44:15.249829 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:44:15.249842 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:44:15.249854 kernel: io scheduler mq-deadline registered
Mar 3 13:44:15.249866 kernel: io scheduler kyber registered
Mar 3 13:44:15.249878 kernel: io scheduler bfq registered
Mar 3 13:44:15.249891 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:44:15.249908 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 3 13:44:15.249921 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 3 13:44:15.249934 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 3 13:44:15.249947 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:44:15.249960 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:44:15.249973 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:44:15.249984 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:44:15.249996 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:44:15.250404 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 3 13:44:15.250436 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 3 13:44:15.250631 kernel: rtc_cmos 00:04: registered as rtc0
Mar 3 13:44:15.250889 kernel: rtc_cmos 00:04: setting system clock to 2026-03-03T13:44:14 UTC (1772545454)
Mar 3 13:44:15.251154 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 3 13:44:15.251173 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 3 13:44:15.251185 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:44:15.251197 kernel: Segment Routing with IPv6
Mar 3 13:44:15.251209 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:44:15.251226 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:44:15.251238 kernel: Key type dns_resolver registered
Mar 3 13:44:15.251252 kernel: IPI shorthand broadcast: enabled
Mar 3 13:44:15.251263 kernel: sched_clock: Marking stable (4254027477, 438159119)->(4855061376, -162874780)
Mar 3 13:44:15.251276 kernel: registered taskstats version 1
Mar 3 13:44:15.251287 kernel: Loading compiled-in X.509 certificates
Mar 3 13:44:15.251299 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:44:15.251310 kernel: Demotion targets for Node 0: null
Mar 3 13:44:15.251321 kernel: Key type .fscrypt registered
Mar 3 13:44:15.251336 kernel: Key type fscrypt-provisioning registered
Mar 3 13:44:15.251348 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 13:44:15.251359 kernel: ima: Allocated hash algorithm: sha1
Mar 3 13:44:15.251371 kernel: ima: No architecture policies found
Mar 3 13:44:15.251382 kernel: clk: Disabling unused clocks
Mar 3 13:44:15.251394 kernel: Warning: unable to open an initial console.
Mar 3 13:44:15.251406 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 3 13:44:15.251417 kernel: Write protecting the kernel read-only data: 40960k
Mar 3 13:44:15.251429 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 3 13:44:15.251444 kernel: Run /init as init process
Mar 3 13:44:15.251455 kernel: with arguments:
Mar 3 13:44:15.251466 kernel: /init
Mar 3 13:44:15.251478 kernel: with environment:
Mar 3 13:44:15.251489 kernel: HOME=/
Mar 3 13:44:15.251500 kernel: TERM=linux
Mar 3 13:44:15.251513 systemd[1]: Successfully made /usr/ read-only.
Mar 3 13:44:15.251530 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:44:15.251548 systemd[1]: Detected virtualization kvm.
Mar 3 13:44:15.251560 systemd[1]: Detected architecture x86-64.
Mar 3 13:44:15.251572 systemd[1]: Running in initrd.
Mar 3 13:44:15.251583 systemd[1]: No hostname configured, using default hostname.
Mar 3 13:44:15.251595 systemd[1]: Hostname set to .
Mar 3 13:44:15.251607 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:44:15.251619 systemd[1]: Queued start job for default target initrd.target.
Mar 3 13:44:15.251631 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:44:15.251706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:44:15.251727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 3 13:44:15.251741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:44:15.251753 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 3 13:44:15.251767 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 3 13:44:15.251785 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 3 13:44:15.251797 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 3 13:44:15.251810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:44:15.251822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:44:15.251835 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:44:15.251847 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:44:15.251864 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:44:15.251877 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:44:15.251894 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:44:15.251912 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:44:15.251926 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 3 13:44:15.251940 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 3 13:44:15.251955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:44:15.251967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:44:15.251979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:44:15.251993 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:44:15.252010 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 3 13:44:15.252022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:44:15.252035 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 3 13:44:15.252048 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 3 13:44:15.252061 systemd[1]: Starting systemd-fsck-usr.service...
Mar 3 13:44:15.252141 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:44:15.252159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:44:15.252174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:44:15.252188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 3 13:44:15.252212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:44:15.252276 systemd-journald[203]: Collecting audit messages is disabled.
Mar 3 13:44:15.252311 systemd[1]: Finished systemd-fsck-usr.service.
Mar 3 13:44:15.252327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 13:44:15.252342 systemd-journald[203]: Journal started
Mar 3 13:44:15.252374 systemd-journald[203]: Runtime Journal (/run/log/journal/d6baa9629bcc4da5994dceb517df5970) is 6M, max 48.3M, 42.2M free.
Mar 3 13:44:15.228852 systemd-modules-load[204]: Inserted module 'overlay'
Mar 3 13:44:15.406559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 3 13:44:15.406605 kernel: Bridge firewalling registered
Mar 3 13:44:15.406625 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:44:15.278384 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 3 13:44:15.414293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:44:15.418907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:44:15.430463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 3 13:44:15.431816 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:44:15.439570 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 13:44:15.462499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:44:15.467575 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 3 13:44:15.469756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:44:15.474038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:44:15.493559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:44:15.494163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:44:15.507156 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 3 13:44:15.515071 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 13:44:15.529956 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:44:15.553769 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:44:15.579602 systemd-resolved[243]: Positive Trust Anchors:
Mar 3 13:44:15.579639 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 13:44:15.579699 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 13:44:15.582399 systemd-resolved[243]: Defaulting to hostname 'linux'.
Mar 3 13:44:15.584257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 13:44:15.591736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:44:15.780166 kernel: SCSI subsystem initialized
Mar 3 13:44:15.794191 kernel: Loading iSCSI transport class v2.0-870.
Mar 3 13:44:15.811204 kernel: iscsi: registered transport (tcp)
Mar 3 13:44:15.841721 kernel: iscsi: registered transport (qla4xxx)
Mar 3 13:44:15.841852 kernel: QLogic iSCSI HBA Driver
Mar 3 13:44:15.874807 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:44:15.907925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:44:15.910810 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:44:15.994364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:44:15.996817 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 3 13:44:16.080192 kernel: raid6: avx2x4 gen() 32279 MB/s
Mar 3 13:44:16.098194 kernel: raid6: avx2x2 gen() 28240 MB/s
Mar 3 13:44:16.117715 kernel: raid6: avx2x1 gen() 21566 MB/s
Mar 3 13:44:16.117812 kernel: raid6: using algorithm avx2x4 gen() 32279 MB/s
Mar 3 13:44:16.138181 kernel: raid6: .... xor() 4359 MB/s, rmw enabled
Mar 3 13:44:16.138245 kernel: raid6: using avx2x2 recovery algorithm
Mar 3 13:44:16.163201 kernel: xor: automatically using best checksumming function avx
Mar 3 13:44:17.287753 kernel: hrtimer: interrupt took 84313838 ns
Mar 3 13:44:17.453464 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 3 13:44:17.590315 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:44:17.601794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:44:17.868539 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Mar 3 13:44:18.010884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:44:18.055299 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 3 13:44:18.178579 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Mar 3 13:44:18.278418 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:44:18.280956 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:44:18.468039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:44:18.479992 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 3 13:44:18.565185 kernel: cryptd: max_cpu_qlen set to 1000
Mar 3 13:44:18.585167 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 3 13:44:18.585539 kernel: AES CTR mode by8 optimization enabled
Mar 3 13:44:18.598633 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 3 13:44:18.614546 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 3 13:44:18.614597 kernel: GPT:9289727 != 19775487
Mar 3 13:44:18.614613 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 3 13:44:18.619175 kernel: GPT:9289727 != 19775487
Mar 3 13:44:18.619214 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 3 13:44:18.651272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:44:18.795183 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 3 13:44:18.856389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:44:18.858426 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:44:18.909993 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:44:18.916170 kernel: libata version 3.00 loaded.
Mar 3 13:44:18.939541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:44:18.949819 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:44:19.011650 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 3 13:44:19.057208 kernel: ahci 0000:00:1f.2: version 3.0
Mar 3 13:44:19.059202 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 3 13:44:19.063204 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 3 13:44:19.063487 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 3 13:44:19.063770 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 3 13:44:19.067749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 3 13:44:19.275709 kernel: scsi host0: ahci
Mar 3 13:44:19.275984 kernel: scsi host1: ahci
Mar 3 13:44:19.276274 kernel: scsi host2: ahci
Mar 3 13:44:19.276530 kernel: scsi host3: ahci
Mar 3 13:44:19.276767 kernel: scsi host4: ahci
Mar 3 13:44:19.276977 kernel: scsi host5: ahci
Mar 3 13:44:19.277227 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 3 13:44:19.277240 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 3 13:44:19.277251 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 3 13:44:19.277262 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 3 13:44:19.277272 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 3 13:44:19.277282 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 3 13:44:19.272472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:44:19.292462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 3 13:44:19.305519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 3 13:44:19.305742 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 3 13:44:19.329909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 3 13:44:19.386129 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 3 13:44:19.386199 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 3 13:44:19.387343 disk-uuid[618]: Primary Header is updated.
Mar 3 13:44:19.387343 disk-uuid[618]: Secondary Entries is updated.
Mar 3 13:44:19.387343 disk-uuid[618]: Secondary Header is updated.
Mar 3 13:44:19.400881 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:44:19.400910 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 3 13:44:19.407545 kernel: ata3.00: LPM support broken, forcing max_power
Mar 3 13:44:19.407575 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 3 13:44:19.407617 kernel: ata3.00: applying bridge limits
Mar 3 13:44:19.414584 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 3 13:44:19.423072 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 3 13:44:19.434202 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 3 13:44:19.434237 kernel: ata3.00: LPM support broken, forcing max_power
Mar 3 13:44:19.440181 kernel: ata3.00: configured for UDMA/100
Mar 3 13:44:19.451223 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 3 13:44:19.510998 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 3 13:44:19.511334 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 3 13:44:19.535177 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 3 13:44:20.063444 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:44:20.068839 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:44:20.077474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:44:20.081778 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:44:20.087263 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 3 13:44:20.130408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:44:20.437360 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:44:20.442184 disk-uuid[619]: The operation has completed successfully.
Mar 3 13:44:20.530419 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 3 13:44:20.530740 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 3 13:44:20.587924 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 3 13:44:20.640714 sh[648]: Success
Mar 3 13:44:20.672355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 3 13:44:20.672423 kernel: device-mapper: uevent: version 1.0.3
Mar 3 13:44:20.676026 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 3 13:44:20.692202 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 3 13:44:20.759157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 3 13:44:20.766206 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 3 13:44:20.791906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 3 13:44:20.806857 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (661)
Mar 3 13:44:20.806880 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b
Mar 3 13:44:20.812769 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:44:20.843338 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 3 13:44:20.843427 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 3 13:44:20.846409 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 3 13:44:20.852702 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:44:20.857015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 3 13:44:20.868815 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 3 13:44:20.876398 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 3 13:44:20.939162 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (696)
Mar 3 13:44:20.947222 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:44:20.947301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:44:20.957217 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:44:20.957333 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:44:20.971250 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:44:20.973319 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 3 13:44:20.974787 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 3 13:44:21.554581 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:44:21.570008 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 13:44:21.655409 ignition[751]: Ignition 2.22.0
Mar 3 13:44:21.655500 ignition[751]: Stage: fetch-offline
Mar 3 13:44:21.655805 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Mar 3 13:44:21.655824 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:44:21.656265 ignition[751]: parsed url from cmdline: ""
Mar 3 13:44:21.656272 ignition[751]: no config URL provided
Mar 3 13:44:21.656280 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
Mar 3 13:44:21.656293 ignition[751]: no config at "/usr/lib/ignition/user.ign"
Mar 3 13:44:21.656434 ignition[751]: op(1): [started] loading QEMU firmware config module
Mar 3 13:44:21.656442 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 3 13:44:21.691448 ignition[751]: op(1): [finished] loading QEMU firmware config module
Mar 3 13:44:21.691976 systemd-networkd[838]: lo: Link UP
Mar 3 13:44:21.691982 systemd-networkd[838]: lo: Gained carrier
Mar 3 13:44:21.695024 systemd-networkd[838]: Enumeration completed
Mar 3 13:44:21.695242 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 13:44:21.697512 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:44:21.697518 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 13:44:21.701594 systemd[1]: Reached target network.target - Network.
Mar 3 13:44:21.703394 systemd-networkd[838]: eth0: Link UP
Mar 3 13:44:21.703664 systemd-networkd[838]: eth0: Gained carrier
Mar 3 13:44:21.703710 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:44:21.752287 systemd-networkd[838]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 3 13:44:21.969364 ignition[751]: parsing config with SHA512: 3336559fa5af6e6f0560e526950f93e8fdcc62ba49f3e9085bc19f1afb47ffa60682c7e74b2bd979060c4b4a3b0866b1f94c8a1d6770c1577a3b01974551ee5f
Mar 3 13:44:22.177380 unknown[751]: fetched base config from "system"
Mar 3 13:44:22.177421 unknown[751]: fetched user config from "qemu"
Mar 3 13:44:22.180603 ignition[751]: fetch-offline: fetch-offline passed
Mar 3 13:44:22.187382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:44:22.181008 ignition[751]: Ignition finished successfully
Mar 3 13:44:22.194897 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 3 13:44:22.196805 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 3 13:44:22.359268 ignition[846]: Ignition 2.22.0
Mar 3 13:44:22.359313 ignition[846]: Stage: kargs
Mar 3 13:44:22.359527 ignition[846]: no configs at "/usr/lib/ignition/base.d"
Mar 3 13:44:22.359544 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:44:22.364470 ignition[846]: kargs: kargs passed
Mar 3 13:44:22.371636 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 3 13:44:22.364666 ignition[846]: Ignition finished successfully
Mar 3 13:44:22.378395 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 3 13:44:22.503504 ignition[854]: Ignition 2.22.0
Mar 3 13:44:22.503555 ignition[854]: Stage: disks
Mar 3 13:44:22.503858 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Mar 3 13:44:22.508306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 3 13:44:22.503882 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:44:22.514857 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 3 13:44:22.505252 ignition[854]: disks: disks passed
Mar 3 13:44:22.524362 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 3 13:44:22.505347 ignition[854]: Ignition finished successfully
Mar 3 13:44:22.535901 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:44:22.539309 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 13:44:22.539402 systemd[1]: Reached target basic.target - Basic System.
Mar 3 13:44:22.556203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 3 13:44:22.609820 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 3 13:44:22.617845 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 3 13:44:22.631890 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 3 13:44:22.912253 kernel: EXT4-fs (vda9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none.
Mar 3 13:44:22.913509 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 3 13:44:22.916706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:44:22.925947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 13:44:22.934138 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 3 13:44:22.939597 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 3 13:44:22.939731 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 3 13:44:22.984644 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872)
Mar 3 13:44:22.984725 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:44:22.984747 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:44:22.984764 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:44:22.984779 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:44:22.939784 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:44:22.954458 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 3 13:44:22.960734 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 3 13:44:22.985962 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 13:44:23.023754 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory
Mar 3 13:44:23.033380 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory
Mar 3 13:44:23.042487 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory
Mar 3 13:44:23.048667 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 3 13:44:23.200937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 3 13:44:23.203932 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 3 13:44:23.209874 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 3 13:44:23.246326 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 3 13:44:23.252666 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:44:23.278490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 3 13:44:23.347807 ignition[986]: INFO : Ignition 2.22.0
Mar 3 13:44:23.347807 ignition[986]: INFO : Stage: mount
Mar 3 13:44:23.352942 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:44:23.352942 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:44:23.352942 ignition[986]: INFO : mount: mount passed
Mar 3 13:44:23.352942 ignition[986]: INFO : Ignition finished successfully
Mar 3 13:44:23.373622 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 3 13:44:23.375829 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 3 13:44:23.683158 systemd-networkd[838]: eth0: Gained IPv6LL
Mar 3 13:44:23.919376 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 13:44:23.988212 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999)
Mar 3 13:44:23.994851 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:44:23.994908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:44:24.004481 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:44:24.004527 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:44:24.007355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 13:44:24.159649 ignition[1016]: INFO : Ignition 2.22.0
Mar 3 13:44:24.159649 ignition[1016]: INFO : Stage: files
Mar 3 13:44:24.166359 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:44:24.166359 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:44:24.177393 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 13:44:24.177393 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 13:44:24.177393 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 13:44:24.198174 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 13:44:24.204503 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 13:44:24.211975 unknown[1016]: wrote ssh authorized keys file for user: core
Mar 3 13:44:24.218293 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 13:44:24.235822 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:44:24.247309 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 3 13:44:24.568969 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 13:44:25.068184 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:44:25.068184 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 13:44:25.081834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 3 13:44:25.198479 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 3 13:44:25.703524 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 13:44:25.712372 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 13:44:25.719512 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 13:44:25.745659 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:44:25.755931 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:44:25.755931 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 3 13:44:25.769530 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 3 13:44:26.079212 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 3 13:44:29.376154 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 3 13:44:29.376154 ignition[1016]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 3 13:44:29.391872 ignition[1016]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:44:29.704787 ignition[1016]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:44:29.720476 ignition[1016]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:44:29.728929 ignition[1016]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:44:29.728929 ignition[1016]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 13:44:29.742581 ignition[1016]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 13:44:29.742581 ignition[1016]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:44:29.742581 ignition[1016]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:44:29.742581 ignition[1016]: INFO : files: files passed
Mar 3 13:44:29.742581 ignition[1016]: INFO : Ignition finished successfully
Mar 3 13:44:29.736293 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 13:44:29.743970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 13:44:29.752638 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 13:44:29.785277 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 13:44:29.799222 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 3 13:44:29.785480 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 13:44:29.809533 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:44:29.809533 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:44:29.839992 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:44:29.816786 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:44:29.840865 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 13:44:29.857274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 13:44:29.963600 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 13:44:29.963914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 13:44:29.967926 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 13:44:29.981554 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 13:44:29.990915 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 13:44:29.995749 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 13:44:30.187402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:44:30.190657 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 13:44:30.236677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:44:30.237150 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:44:30.251758 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 13:44:30.259496 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 13:44:30.259792 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:44:30.270659 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 13:44:30.279191 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 13:44:30.282615 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 13:44:30.290599 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:44:30.294024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 13:44:30.302316 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:44:30.311274 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 13:44:30.326324 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:44:30.329984 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 13:44:30.343559 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 13:44:30.346658 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 13:44:30.352911 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 13:44:30.353051 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:44:30.367297 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:44:30.375439 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:44:30.379762 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 13:44:30.379973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:44:30.397801 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 13:44:30.398035 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:44:30.411896 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 13:44:30.412213 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:44:30.416214 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 13:44:30.444483 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 13:44:30.448235 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:44:30.453990 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 13:44:30.461228 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 13:44:30.469432 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 13:44:30.469553 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:44:30.479398 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 13:44:30.479558 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:44:30.482895 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 13:44:30.483215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:44:30.495127 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 13:44:30.495301 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 13:44:30.506794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 13:44:30.511314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 13:44:30.534671 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 13:44:30.535014 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:44:30.539448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 13:44:30.539684 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:44:30.555911 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 13:44:30.557526 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 13:44:30.587619 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 13:44:30.593001 ignition[1071]: INFO : Ignition 2.22.0
Mar 3 13:44:30.593001 ignition[1071]: INFO : Stage: umount
Mar 3 13:44:30.593001 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:44:30.593001 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:44:30.593001 ignition[1071]: INFO : umount: umount passed
Mar 3 13:44:30.593001 ignition[1071]: INFO : Ignition finished successfully
Mar 3 13:44:30.594192 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 13:44:30.594397 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 13:44:30.601637 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 13:44:30.601893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 13:44:30.609239 systemd[1]: Stopped target network.target - Network.
Mar 3 13:44:30.620302 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 13:44:30.620440 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 13:44:30.637324 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 13:44:30.637504 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 13:44:30.645206 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 13:44:30.645287 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 13:44:30.649997 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 13:44:30.650152 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 13:44:30.656888 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 13:44:30.656970 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 13:44:30.669577 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 13:44:30.676881 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 13:44:30.694499 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 13:44:30.694759 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 13:44:30.718458 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 13:44:30.719060 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 13:44:30.719379 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 13:44:30.736368 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 13:44:30.739308 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 13:44:30.743378 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 13:44:30.743567 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:44:30.773766 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 13:44:30.775004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 13:44:30.775191 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:44:30.797379 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:44:30.797507 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:44:30.823068 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 13:44:30.823320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:44:30.829943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 13:44:30.830048 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:44:30.840923 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:44:30.847324 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:44:30.847449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:44:30.866813 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 13:44:30.867138 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 13:44:30.876463 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 13:44:30.876846 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:44:30.878070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 13:44:30.878229 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:44:30.889528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 13:44:30.889601 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:44:30.897213 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 13:44:30.897300 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:44:30.904381 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 13:44:30.904508 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:44:30.912488 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 13:44:30.912636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:44:30.931895 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 13:44:30.938428 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 13:44:30.938527 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:44:30.950550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 13:44:30.950648 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:44:30.972807 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 3 13:44:30.972917 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:44:30.988406 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 13:44:30.988515 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:44:30.994895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:44:30.994999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:44:31.019003 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 3 13:44:31.019140 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 3 13:44:31.019198 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 3 13:44:31.019250 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:44:31.019871 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 13:44:31.020068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 13:44:31.023648 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 13:44:31.035993 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 13:44:31.083606 systemd[1]: Switching root.
Mar 3 13:44:31.125550 systemd-journald[203]: Journal stopped
Mar 3 13:44:34.289971 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 3 13:44:34.290414 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 13:44:34.290441 kernel: SELinux: policy capability open_perms=1
Mar 3 13:44:34.290458 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 13:44:34.290473 kernel: SELinux: policy capability always_check_network=0
Mar 3 13:44:34.290494 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 13:44:34.290514 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 13:44:34.290529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 13:44:34.290544 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 13:44:34.290563 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 13:44:34.290579 kernel: audit: type=1403 audit(1772545471.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 13:44:34.290638 systemd[1]: Successfully loaded SELinux policy in 110.546ms.
Mar 3 13:44:34.290773 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.187ms.
Mar 3 13:44:34.290797 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:44:34.290819 systemd[1]: Detected virtualization kvm.
Mar 3 13:44:34.290836 systemd[1]: Detected architecture x86-64.
Mar 3 13:44:34.290852 systemd[1]: Detected first boot.
Mar 3 13:44:34.290896 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:44:34.290918 zram_generator::config[1115]: No configuration found.
Mar 3 13:44:34.290935 kernel: Guest personality initialized and is inactive
Mar 3 13:44:34.290951 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 3 13:44:34.290966 kernel: Initialized host personality
Mar 3 13:44:34.290982 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 13:44:34.290999 systemd[1]: Populated /etc with preset unit settings.
Mar 3 13:44:34.291040 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 13:44:34.291061 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 13:44:34.291128 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 13:44:34.291151 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:44:34.291168 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 13:44:34.291185 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 13:44:34.291202 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 13:44:34.291218 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 13:44:34.291234 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 13:44:34.291251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 13:44:34.291275 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 13:44:34.291294 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 13:44:34.291311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:44:34.291327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:44:34.291344 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 13:44:34.291360 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 13:44:34.291378 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 13:44:34.291460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:44:34.291479 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 13:44:34.291500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:44:34.291541 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:44:34.291558 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 13:44:34.291574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 13:44:34.291590 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:44:34.291608 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 13:44:34.291624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:44:34.291641 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:44:34.291657 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:44:34.291677 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:44:34.291694 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 13:44:34.291745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 13:44:34.291766 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 13:44:34.291782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:44:34.291824 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:44:34.291841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:44:34.291857 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 13:44:34.291873 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 13:44:34.291894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 13:44:34.291910 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 13:44:34.291927 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:44:34.291944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 13:44:34.291986 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 13:44:34.292003 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 13:44:34.292020 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 13:44:34.292037 systemd[1]: Reached target machines.target - Containers.
Mar 3 13:44:34.292057 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 13:44:34.292124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:44:34.292145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:44:34.292186 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 13:44:34.292203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:44:34.292219 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:44:34.292236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:44:34.292254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 13:44:34.292270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:44:34.292292 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 13:44:34.292308 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 13:44:34.292325 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 13:44:34.292341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 13:44:34.292356 kernel: fuse: init (API version 7.41)
Mar 3 13:44:34.292372 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 13:44:34.292389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:44:34.292406 kernel: ACPI: bus type drm_connector registered
Mar 3 13:44:34.292425 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:44:34.292441 kernel: loop: module loaded
Mar 3 13:44:34.292457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:44:34.292473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:44:34.292490 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 13:44:34.292506 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 13:44:34.292522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:44:34.292542 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 13:44:34.292559 systemd[1]: Stopped verity-setup.service.
Mar 3 13:44:34.292576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:44:34.292672 systemd-journald[1200]: Collecting audit messages is disabled.
Mar 3 13:44:34.292803 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 13:44:34.292823 systemd-journald[1200]: Journal started
Mar 3 13:44:34.292911 systemd-journald[1200]: Runtime Journal (/run/log/journal/d6baa9629bcc4da5994dceb517df5970) is 6M, max 48.3M, 42.2M free.
Mar 3 13:44:32.453697 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 13:44:32.468377 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 3 13:44:32.469518 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 13:44:32.470502 systemd[1]: systemd-journald.service: Consumed 1.118s CPU time.
Mar 3 13:44:34.305265 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:44:34.307307 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 13:44:34.311589 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 13:44:34.315611 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 13:44:34.320617 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 13:44:34.349630 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 13:44:34.586367 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1127717123 wd_nsec: 1127717181
Mar 3 13:44:34.595937 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 13:44:34.604061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:44:34.609065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 13:44:34.609703 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 13:44:34.616342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:44:34.616909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:44:34.624832 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:44:34.625892 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:44:34.647692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:44:34.648063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:44:34.653941 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 13:44:34.654488 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 13:44:34.659003 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:44:34.659408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:44:34.663812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:44:34.668059 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:44:34.673039 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 13:44:34.677898 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 13:44:34.705417 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:44:34.712017 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 13:44:34.717546 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 13:44:34.722632 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 13:44:34.723335 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:44:34.743373 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 13:44:34.750662 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 13:44:34.754400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:44:34.758850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 13:44:34.767287 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 13:44:34.771591 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:44:34.787594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 13:44:34.793388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:44:34.795519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:44:34.807411 systemd-journald[1200]: Time spent on flushing to /var/log/journal/d6baa9629bcc4da5994dceb517df5970 is 26.863ms for 981 entries.
Mar 3 13:44:34.807411 systemd-journald[1200]: System Journal (/var/log/journal/d6baa9629bcc4da5994dceb517df5970) is 8M, max 195.6M, 187.6M free.
Mar 3 13:44:34.896254 systemd-journald[1200]: Received client request to flush runtime journal.
Mar 3 13:44:34.896335 kernel: loop0: detected capacity change from 0 to 110984
Mar 3 13:44:34.804024 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 13:44:34.823400 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 13:44:34.841051 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 13:44:34.846522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 13:44:34.853974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 13:44:34.865674 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 13:44:34.879520 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 13:44:34.888761 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:44:34.905173 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 13:44:34.909495 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 13:44:34.914410 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Mar 3 13:44:34.916247 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Mar 3 13:44:34.923368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:44:34.948538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:44:34.960046 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 13:44:34.968531 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 13:44:34.980169 kernel: loop1: detected capacity change from 0 to 228704
Mar 3 13:44:35.033189 kernel: loop2: detected capacity change from 0 to 128560
Mar 3 13:44:35.032651 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 13:44:35.042289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:44:35.077013 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 3 13:44:35.077067 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 3 13:44:35.083634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:44:35.094176 kernel: loop3: detected capacity change from 0 to 110984
Mar 3 13:44:35.104651 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 13:44:35.114187 kernel: loop4: detected capacity change from 0 to 228704
Mar 3 13:44:35.145527 kernel: loop5: detected capacity change from 0 to 128560
Mar 3 13:44:35.164231 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 3 13:44:35.166188 (sd-merge)[1260]: Merged extensions into '/usr'.
Mar 3 13:44:35.172506 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 13:44:35.172652 systemd[1]: Reloading...
Mar 3 13:44:35.266187 zram_generator::config[1283]: No configuration found.
Mar 3 13:44:35.352231 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 13:44:35.530438 systemd[1]: Reloading finished in 356 ms.
Mar 3 13:44:35.561604 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 13:44:35.567150 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 13:44:35.572844 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 13:44:35.609182 systemd[1]: Starting ensure-sysext.service...
Mar 3 13:44:35.614051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 13:44:35.629492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:44:35.650271 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 13:44:35.650362 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 13:44:35.650944 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 13:44:35.651549 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 13:44:35.653338 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 13:44:35.653842 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Mar 3 13:44:35.653991 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Mar 3 13:44:35.660623 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)...
Mar 3 13:44:35.660667 systemd[1]: Reloading...
Mar 3 13:44:35.664612 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:44:35.664645 systemd-tmpfiles[1328]: Skipping /boot
Mar 3 13:44:35.685793 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:44:35.685819 systemd-tmpfiles[1328]: Skipping /boot
Mar 3 13:44:35.687680 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Mar 3 13:44:35.742204 zram_generator::config[1356]: No configuration found.
Mar 3 13:44:35.990193 kernel: mousedev: PS/2 mouse device common for all mice
Mar 3 13:44:36.028210 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 3 13:44:36.043585 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 3 13:44:36.048745 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 3 13:44:36.078221 kernel: ACPI: button: Power Button [PWRF]
Mar 3 13:44:36.127295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 3 13:44:36.133420 systemd[1]: Reloading finished in 472 ms.
Mar 3 13:44:36.148574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:44:36.153691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:44:36.282660 systemd[1]: Finished ensure-sysext.service.
Mar 3 13:44:36.341638 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 13:44:36.344795 kernel: kvm_amd: TSC scaling supported
Mar 3 13:44:36.344854 kernel: kvm_amd: Nested Virtualization enabled
Mar 3 13:44:36.344897 kernel: kvm_amd: Nested Paging enabled
Mar 3 13:44:36.348168 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 3 13:44:36.348231 kernel: kvm_amd: PMU virtualization is disabled
Mar 3 13:44:36.470942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:44:36.496071 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:44:36.544373 kernel: EDAC MC: Ver: 3.0.0
Mar 3 13:44:36.544347 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 13:44:36.550060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:44:36.556023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:44:36.566293 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:44:36.575521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:44:36.593186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:44:36.598878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:44:36.607567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 3 13:44:36.612512 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:44:36.619509 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 13:44:36.652606 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 13:44:36.668371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 13:44:36.673830 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 3 13:44:36.677483 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 13:44:36.692180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:44:36.696351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:44:36.702201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:44:36.702522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:44:36.713875 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:44:36.720228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:44:36.748325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:44:36.748583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:44:36.758707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:44:36.759352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:44:36.771987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 3 13:44:36.780066 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 13:44:36.816562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:44:36.816850 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:44:36.819011 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 13:44:36.849681 augenrules[1488]: No rules
Mar 3 13:44:36.854538 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 13:44:36.860900 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:44:36.862168 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:44:36.875784 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 13:44:36.892501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 13:44:36.905260 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 13:44:36.960030 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 13:44:37.009786 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 3 13:44:37.166057 systemd-resolved[1467]: Positive Trust Anchors:
Mar 3 13:44:37.166165 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 13:44:37.166206 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 13:44:37.174852 systemd-resolved[1467]: Defaulting to hostname 'linux'.
Mar 3 13:44:37.190369 systemd-networkd[1464]: lo: Link UP
Mar 3 13:44:37.190423 systemd-networkd[1464]: lo: Gained carrier
Mar 3 13:44:37.193520 systemd-networkd[1464]: Enumeration completed
Mar 3 13:44:37.194824 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:44:37.194863 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 13:44:37.198333 systemd-networkd[1464]: eth0: Link UP
Mar 3 13:44:37.198705 systemd-networkd[1464]: eth0: Gained carrier
Mar 3 13:44:37.198784 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:44:37.252323 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 3 13:44:37.254553 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection.
Mar 3 13:44:37.257249 systemd-timesyncd[1469]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 3 13:44:37.257472 systemd-timesyncd[1469]: Initial clock synchronization to Tue 2026-03-03 13:44:37.331461 UTC.
Mar 3 13:44:37.278453 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 13:44:37.289811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 3 13:44:37.307461 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 13:44:37.318401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:44:37.330557 systemd[1]: Reached target network.target - Network.
Mar 3 13:44:37.351124 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:44:37.356689 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 13:44:37.361955 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 3 13:44:37.373047 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 13:44:37.382871 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 3 13:44:37.393354 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 3 13:44:37.405037 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 3 13:44:37.405333 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:44:37.409559 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 13:44:37.464551 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 3 13:44:37.477038 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 3 13:44:37.485036 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:44:37.502503 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 3 13:44:37.549217 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 3 13:44:37.562063 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 3 13:44:37.573590 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 3 13:44:37.583550 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 3 13:44:37.625641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 3 13:44:37.649411 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 3 13:44:37.666862 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 3 13:44:37.678639 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 3 13:44:37.687020 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 3 13:44:37.693823 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:44:37.700017 systemd[1]: Reached target basic.target - Basic System.
Mar 3 13:44:37.707208 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:44:37.709154 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:44:37.712572 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 3 13:44:37.726969 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 3 13:44:37.759011 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 3 13:44:37.767923 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 3 13:44:37.775772 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 3 13:44:37.783006 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 3 13:44:37.786231 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 3 13:44:37.789910 jq[1519]: false
Mar 3 13:44:37.792332 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 3 13:44:37.801845 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 3 13:44:37.810291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 3 13:44:37.815798 extend-filesystems[1520]: Found /dev/vda6
Mar 3 13:44:37.849291 extend-filesystems[1520]: Found /dev/vda9
Mar 3 13:44:37.854312 extend-filesystems[1520]: Checking size of /dev/vda9
Mar 3 13:44:37.859519 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Mar 3 13:44:37.855401 oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Mar 3 13:44:37.859941 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 3 13:44:37.872377 extend-filesystems[1520]: Resized partition /dev/vda9
Mar 3 13:44:37.876690 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 3 13:44:37.877345 extend-filesystems[1540]: resize2fs 1.47.3 (8-Jul-2025)
Mar 3 13:44:37.891368 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 3 13:44:37.891055 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 3 13:44:37.891514 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting
Mar 3 13:44:37.891514 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:44:37.891514 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache
Mar 3 13:44:37.884557 oslogin_cache_refresh[1521]: Failure getting users, quitting
Mar 3 13:44:37.884614 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:44:37.884667 oslogin_cache_refresh[1521]: Refreshing group entry cache
Mar 3 13:44:37.891870 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 3 13:44:37.892810 systemd[1]: Starting update-engine.service - Update Engine...
Mar 3 13:44:37.902835 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting
Mar 3 13:44:37.902835 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:44:37.902650 oslogin_cache_refresh[1521]: Failure getting groups, quitting
Mar 3 13:44:37.902667 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:44:37.908935 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 3 13:44:37.972513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 3 13:44:37.981960 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 3 13:44:38.050200 jq[1545]: true
Mar 3 13:44:37.990830 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 3 13:44:37.993596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 3 13:44:37.997485 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 3 13:44:37.999761 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 3 13:44:38.010017 systemd[1]: motdgen.service: Deactivated successfully.
Mar 3 13:44:38.013494 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 3 13:44:38.051004 update_engine[1543]: I20260303 13:44:38.050827 1543 main.cc:92] Flatcar Update Engine starting
Mar 3 13:44:38.055220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 3 13:44:38.055814 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 3 13:44:38.078859 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 3 13:44:38.108487 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 3 13:44:38.113406 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 3 13:44:38.113406 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 3 13:44:38.113406 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 3 13:44:38.162070 extend-filesystems[1520]: Resized filesystem in /dev/vda9
Mar 3 13:44:38.166257 jq[1552]: true
Mar 3 13:44:38.119402 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 3 13:44:38.119926 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 3 13:44:38.176907 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 3 13:44:38.176955 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 3 13:44:38.189187 tar[1551]: linux-amd64/LICENSE
Mar 3 13:44:38.189187 tar[1551]: linux-amd64/helm
Mar 3 13:44:38.185474 systemd-logind[1542]: New seat seat0.
Mar 3 13:44:38.203502 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 13:44:38.198529 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 3 13:44:38.223506 dbus-daemon[1517]: [system] SELinux support is enabled
Mar 3 13:44:38.224963 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 3 13:44:38.256203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 3 13:44:38.256296 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 3 13:44:38.262297 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 3 13:44:38.262344 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 3 13:44:38.270265 update_engine[1543]: I20260303 13:44:38.269774 1543 update_check_scheduler.cc:74] Next update check in 6m2s
Mar 3 13:44:38.270731 systemd[1]: Started update-engine.service - Update Engine.
Mar 3 13:44:38.271460 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 3 13:44:38.286147 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 3 13:44:38.298234 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 13:44:38.300937 bash[1590]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 13:44:38.305326 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 3 13:44:38.357763 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 13:44:38.363817 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 3 13:44:38.411450 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 13:44:38.414280 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 13:44:38.453419 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 13:44:38.459622 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 13:44:38.493829 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 13:44:38.506728 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 13:44:38.518669 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 13:44:38.523287 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 13:44:39.507233 systemd-networkd[1464]: eth0: Gained IPv6LL
Mar 3 13:44:39.517489 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 13:44:39.531974 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 13:44:39.564199 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 3 13:44:39.586356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:44:39.614632 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 13:44:40.683368 containerd[1559]: time="2026-03-03T13:44:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:44:40.702231 containerd[1559]: time="2026-03-03T13:44:40.687378040Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:44:40.692340 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 3 13:44:40.718759 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:45292.service - OpenSSH per-connection server daemon (10.0.0.1:45292).
Mar 3 13:44:41.101918 containerd[1559]: time="2026-03-03T13:44:41.002766625Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.444µs"
Mar 3 13:44:41.101918 containerd[1559]: time="2026-03-03T13:44:41.002912951Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:44:41.101918 containerd[1559]: time="2026-03-03T13:44:41.003066402Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:44:41.227657 containerd[1559]: time="2026-03-03T13:44:41.226871409Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.233241785Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.233894350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.234416615Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.234433310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.282628719Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.287586747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.287867833Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:44:41.321925 containerd[1559]: time="2026-03-03T13:44:41.287887284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:44:41.326983 containerd[1559]: time="2026-03-03T13:44:41.290772172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:44:41.334993 containerd[1559]: time="2026-03-03T13:44:41.334621199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:44:41.334993 containerd[1559]: time="2026-03-03T13:44:41.371358890Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:44:41.334993 containerd[1559]: time="2026-03-03T13:44:41.371555215Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:44:41.334993 containerd[1559]: time="2026-03-03T13:44:41.385694980Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:44:41.334993 containerd[1559]: time="2026-03-03T13:44:41.394951904Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:44:41.334993 containerd[1559]: time="2026-03-03T13:44:41.396873565Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:44:41.978738 containerd[1559]: time="2026-03-03T13:44:41.978565159Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978848598Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978878256Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978891793Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978904223Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978913928Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978924921Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978936446Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978947479Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978956359Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978964918Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:44:41.987823 containerd[1559]: time="2026-03-03T13:44:41.978978536Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:44:41.989883 containerd[1559]: time="2026-03-03T13:44:41.989797739Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.989960443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.989996296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.990015133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.990060380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.990227770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.990335573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:44:41.990360 containerd[1559]: time="2026-03-03T13:44:41.990348756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:44:41.990611 containerd[1559]: time="2026-03-03T13:44:41.990360011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:44:41.990611 containerd[1559]: time="2026-03-03T13:44:41.990438708Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:44:41.990611 containerd[1559]: time="2026-03-03T13:44:41.990449850Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:44:41.990709 containerd[1559]: time="2026-03-03T13:44:41.990570627Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:44:41.990709 containerd[1559]: time="2026-03-03T13:44:41.990664127Z" level=info msg="Start snapshots syncer"
Mar 3 13:44:42.002953 containerd[1559]: time="2026-03-03T13:44:41.991279864Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:44:42.002953 containerd[1559]: time="2026-03-03T13:44:41.992308513Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:44:42.000971 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.992453486Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.992673435Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.997842451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.998179173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.998312902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.998335842Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.998429674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.998604317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:41.998679352Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:42.000526492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:42.000624283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:42.000645690Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:42.001149712Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:44:42.029072 containerd[1559]: time="2026-03-03T13:44:42.001184260Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:44:42.001749 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001201423Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001219853Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001233689Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001250983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001317727Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001415136Z"
level=info msg="runtime interface created" Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001428981Z" level=info msg="created NRI interface" Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001444537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001472850Z" level=info msg="Connect containerd service" Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.001522882Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 3 13:44:42.029686 containerd[1559]: time="2026-03-03T13:44:42.004458412Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 3 13:44:42.005706 systemd[1]: coreos-metadata.service: Consumed 1.122s CPU time, 1.8M memory peak. Mar 3 13:44:42.014159 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 3 13:44:42.136579 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 3 13:44:42.498697 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 45292 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:44:42.502469 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:44:42.683486 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 3 13:44:42.693925 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 3 13:44:42.723491 systemd-logind[1542]: New session 1 of user core. Mar 3 13:44:42.762770 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
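The `failed to load cni during init` error logged above means `/etc/cni/net.d` holds no network config yet; containerd's CNI conf syncer clears it once a network add-on installs a conflist there. Purely as a hedged illustration (the directory, file name, network name, and subnet below are hypothetical and not taken from this host), a minimal bridge conflist of the kind the syncer expects looks like:

```shell
# Hypothetical example: write a minimal CNI conflist of the shape that would
# satisfy containerd's CNI config loader. All values here are illustrative;
# a real cluster's network add-on (e.g. its DaemonSet) writes the real file.
demo_dir=/tmp/cni-demo
mkdir -p "$demo_dir"
cat > "$demo_dir/10-demo-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
echo "wrote $demo_dir/10-demo-bridge.conflist"
```

On a real node the file would go under `/etc/cni/net.d` (the confDir shown in the CRI config earlier in this log), at which point the "cni plugin not initialized" condition resolves on the next sync.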
Mar 3 13:44:42.781675 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800229011Z" level=info msg="Start subscribing containerd event" Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800369405Z" level=info msg="Start recovering state" Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800577044Z" level=info msg="Start event monitor" Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800601668Z" level=info msg="Start cni network conf syncer for default" Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800614136Z" level=info msg="Start streaming server" Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800628585Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800680748Z" level=info msg="runtime interface starting up..." Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800691386Z" level=info msg="starting plugins..." Mar 3 13:44:42.800939 containerd[1559]: time="2026-03-03T13:44:42.800715698Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 3 13:44:42.803844 containerd[1559]: time="2026-03-03T13:44:42.803763770Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 3 13:44:42.803906 containerd[1559]: time="2026-03-03T13:44:42.803870329Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 3 13:44:42.823061 systemd[1]: Started containerd.service - containerd container runtime. Mar 3 13:44:42.834899 containerd[1559]: time="2026-03-03T13:44:42.834832132Z" level=info msg="containerd successfully booted in 2.152493s" Mar 3 13:44:42.868686 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 3 13:44:42.875728 systemd-logind[1542]: New session c1 of user core. 
Mar 3 13:44:42.898155 tar[1551]: linux-amd64/README.md Mar 3 13:44:43.092371 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 3 13:44:43.529600 systemd[1651]: Queued start job for default target default.target. Mar 3 13:44:43.556367 systemd[1651]: Created slice app.slice - User Application Slice. Mar 3 13:44:43.556416 systemd[1651]: Reached target paths.target - Paths. Mar 3 13:44:43.556480 systemd[1651]: Reached target timers.target - Timers. Mar 3 13:44:43.572457 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 3 13:44:43.605652 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 3 13:44:43.605844 systemd[1651]: Reached target sockets.target - Sockets. Mar 3 13:44:43.606028 systemd[1651]: Reached target basic.target - Basic System. Mar 3 13:44:43.606193 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 3 13:44:43.607942 systemd[1651]: Reached target default.target - Main User Target. Mar 3 13:44:43.608039 systemd[1651]: Startup finished in 710ms. Mar 3 13:44:43.743011 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 3 13:44:43.774575 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:45638.service - OpenSSH per-connection server daemon (10.0.0.1:45638). Mar 3 13:44:43.895500 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 45638 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:44:43.897190 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:44:43.905127 systemd-logind[1542]: New session 2 of user core. Mar 3 13:44:43.911376 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 3 13:44:44.084704 sshd[1674]: Connection closed by 10.0.0.1 port 45638 Mar 3 13:44:44.086672 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Mar 3 13:44:44.095606 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:45638.service: Deactivated successfully. 
Mar 3 13:44:44.098348 systemd[1]: session-2.scope: Deactivated successfully. Mar 3 13:44:44.099666 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Mar 3 13:44:44.102635 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652). Mar 3 13:44:44.104691 systemd-logind[1542]: Removed session 2. Mar 3 13:44:44.211603 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:44:44.213852 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:44:44.222222 systemd-logind[1542]: New session 3 of user core. Mar 3 13:44:44.230345 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 3 13:44:44.423976 sshd[1683]: Connection closed by 10.0.0.1 port 45652 Mar 3 13:44:44.426532 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Mar 3 13:44:44.432457 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:45652.service: Deactivated successfully. Mar 3 13:44:44.436405 systemd[1]: session-3.scope: Deactivated successfully. Mar 3 13:44:44.439708 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Mar 3 13:44:44.441901 systemd-logind[1542]: Removed session 3. Mar 3 13:44:45.573489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:44:45.574459 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 3 13:44:45.575031 systemd[1]: Startup finished in 4.381s (kernel) + 16.644s (initrd) + 14.255s (userspace) = 35.281s. 
Mar 3 13:44:45.593738 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:44:46.739405 kubelet[1694]: E0303 13:44:46.738776 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:44:46.747309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:44:46.747723 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:44:46.846945 systemd[1]: kubelet.service: Consumed 4.628s CPU time, 269M memory peak.
Mar 3 13:44:54.482215 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:51340.service - OpenSSH per-connection server daemon (10.0.0.1:51340).
Mar 3 13:44:54.556565 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 51340 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:44:54.558763 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:44:54.565398 systemd-logind[1542]: New session 4 of user core.
Mar 3 13:44:54.577396 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 13:44:54.601949 sshd[1707]: Connection closed by 10.0.0.1 port 51340
Mar 3 13:44:54.602669 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Mar 3 13:44:54.616476 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:51340.service: Deactivated successfully.
Mar 3 13:44:54.618946 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 13:44:54.620400 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit.
Mar 3 13:44:54.624201 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:51348.service - OpenSSH per-connection server daemon (10.0.0.1:51348).
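The kubelet exit above recurs throughout this log: systemd keeps scheduling restarts, and each attempt fails the same way because `/var/lib/kubelet/config.yaml` does not exist yet (that file is normally written by `kubeadm init` or `kubeadm join`, which the log suggests has not run on this node). A minimal sketch of the same existence check, with a hypothetical helper name not taken from any real tool:

```shell
# Hypothetical helper mirroring the check kubelet performs at startup:
# report whether the kubelet config file exists at a given path.
# The default path is the one named in the error message in the log.
check_kubelet_config() {
  local path="${1:-/var/lib/kubelet/config.yaml}"
  if [ -f "$path" ]; then
    echo "present"
  else
    # kubelet exits with status 1 in this case, so systemd records
    # code=exited, status=1/FAILURE as seen above.
    echo "missing"
  fi
}

# Demo against a path known not to exist:
check_kubelet_config /nonexistent/config.yaml   # prints "missing"
```

Until that file appears, the restart loop visible later in the log (restart counters 1, 2, 3) is the expected behavior, not a distinct fault.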
Mar 3 13:44:54.625897 systemd-logind[1542]: Removed session 4.
Mar 3 13:44:54.713197 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 51348 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:44:54.715004 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:44:54.723156 systemd-logind[1542]: New session 5 of user core.
Mar 3 13:44:54.737423 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 13:44:54.751907 sshd[1716]: Connection closed by 10.0.0.1 port 51348
Mar 3 13:44:54.752633 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Mar 3 13:44:54.768374 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:51348.service: Deactivated successfully.
Mar 3 13:44:54.770742 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 13:44:54.772257 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit.
Mar 3 13:44:54.776634 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:51362.service - OpenSSH per-connection server daemon (10.0.0.1:51362).
Mar 3 13:44:54.778812 systemd-logind[1542]: Removed session 5.
Mar 3 13:44:54.874701 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 51362 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:44:54.876685 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:44:54.886426 systemd-logind[1542]: New session 6 of user core.
Mar 3 13:44:54.901620 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 13:44:54.927239 sshd[1725]: Connection closed by 10.0.0.1 port 51362
Mar 3 13:44:54.930566 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Mar 3 13:44:54.940870 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:51362.service: Deactivated successfully.
Mar 3 13:44:54.947421 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 13:44:54.950392 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit.
Mar 3 13:44:54.955823 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:51366.service - OpenSSH per-connection server daemon (10.0.0.1:51366).
Mar 3 13:44:54.959263 systemd-logind[1542]: Removed session 6.
Mar 3 13:44:55.066399 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 51366 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:44:55.068845 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:44:55.089510 systemd-logind[1542]: New session 7 of user core.
Mar 3 13:44:55.107441 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 13:44:55.147004 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 13:44:55.147558 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:44:55.186578 sudo[1735]: pam_unix(sudo:session): session closed for user root
Mar 3 13:44:55.191419 sshd[1734]: Connection closed by 10.0.0.1 port 51366
Mar 3 13:44:55.192770 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Mar 3 13:44:55.215359 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:51376.service - OpenSSH per-connection server daemon (10.0.0.1:51376).
Mar 3 13:44:55.220465 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:51366.service: Deactivated successfully.
Mar 3 13:44:55.223592 systemd[1]: session-7.scope: Deactivated successfully.
Mar 3 13:44:55.231484 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit.
Mar 3 13:44:55.235725 systemd-logind[1542]: Removed session 7.
Mar 3 13:44:55.307883 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 51376 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:44:55.315059 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:44:55.328428 systemd-logind[1542]: New session 8 of user core.
Mar 3 13:44:55.341136 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 3 13:44:55.365513 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 3 13:44:55.366244 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:44:55.408638 sudo[1746]: pam_unix(sudo:session): session closed for user root Mar 3 13:44:55.461451 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 3 13:44:55.465209 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:44:55.518521 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 13:44:55.642690 augenrules[1768]: No rules Mar 3 13:44:55.646726 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 13:44:55.647622 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 13:44:55.651525 sudo[1745]: pam_unix(sudo:session): session closed for user root Mar 3 13:44:55.654294 sshd[1744]: Connection closed by 10.0.0.1 port 51376 Mar 3 13:44:55.654825 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Mar 3 13:44:55.674791 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:51376.service: Deactivated successfully. Mar 3 13:44:55.683854 systemd[1]: session-8.scope: Deactivated successfully. Mar 3 13:44:55.700714 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Mar 3 13:44:55.705833 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:51378.service - OpenSSH per-connection server daemon (10.0.0.1:51378). Mar 3 13:44:55.710994 systemd-logind[1542]: Removed session 8. 
Mar 3 13:44:55.870164 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 51378 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:44:55.909557 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:44:55.932406 systemd-logind[1542]: New session 9 of user core.
Mar 3 13:44:55.960591 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 13:44:56.022666 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 13:44:56.023397 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:44:57.000811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:44:57.003995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:44:59.916824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:44:59.933721 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:44:59.939273 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 13:44:59.972532 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 13:45:00.550220 kubelet[1810]: E0303 13:45:00.549658 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:45:00.560934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:45:00.561362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:45:00.562283 systemd[1]: kubelet.service: Consumed 2.759s CPU time, 111.2M memory peak.
Mar 3 13:45:02.718680 dockerd[1811]: time="2026-03-03T13:45:02.717661909Z" level=info msg="Starting up"
Mar 3 13:45:02.720258 dockerd[1811]: time="2026-03-03T13:45:02.720217539Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 13:45:02.801052 dockerd[1811]: time="2026-03-03T13:45:02.800934367Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 13:45:03.353570 dockerd[1811]: time="2026-03-03T13:45:03.352959952Z" level=info msg="Loading containers: start."
Mar 3 13:45:03.383448 kernel: Initializing XFRM netlink socket
Mar 3 13:45:04.557969 systemd-networkd[1464]: docker0: Link UP
Mar 3 13:45:04.575009 dockerd[1811]: time="2026-03-03T13:45:04.574224107Z" level=info msg="Loading containers: done."
Mar 3 13:45:04.662039 dockerd[1811]: time="2026-03-03T13:45:04.661775422Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 13:45:04.662371 dockerd[1811]: time="2026-03-03T13:45:04.662135346Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 13:45:04.662511 dockerd[1811]: time="2026-03-03T13:45:04.662442827Z" level=info msg="Initializing buildkit"
Mar 3 13:45:04.749217 dockerd[1811]: time="2026-03-03T13:45:04.749144416Z" level=info msg="Completed buildkit initialization"
Mar 3 13:45:04.757789 dockerd[1811]: time="2026-03-03T13:45:04.757696149Z" level=info msg="Daemon has completed initialization"
Mar 3 13:45:04.758546 dockerd[1811]: time="2026-03-03T13:45:04.758196253Z" level=info msg="API listen on /run/docker.sock"
Mar 3 13:45:04.758698 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 13:45:07.630063 containerd[1559]: time="2026-03-03T13:45:07.629618522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 3 13:45:08.459822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114202677.mount: Deactivated successfully. Mar 3 13:45:10.802847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 3 13:45:10.806780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:45:11.672207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:11.706915 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 13:45:12.053562 kubelet[2106]: E0303 13:45:12.053409 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 13:45:12.058779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 13:45:12.059331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 13:45:12.059946 systemd[1]: kubelet.service: Consumed 954ms CPU time, 110.1M memory peak. 
Mar 3 13:45:12.496513 containerd[1559]: time="2026-03-03T13:45:12.495885227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:12.498824 containerd[1559]: time="2026-03-03T13:45:12.498605995Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 3 13:45:12.499961 containerd[1559]: time="2026-03-03T13:45:12.499859740Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:12.505139 containerd[1559]: time="2026-03-03T13:45:12.504999791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:12.506171 containerd[1559]: time="2026-03-03T13:45:12.506032257Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 4.876364009s" Mar 3 13:45:12.506171 containerd[1559]: time="2026-03-03T13:45:12.506161025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 3 13:45:12.509696 containerd[1559]: time="2026-03-03T13:45:12.509626781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 3 13:45:14.088748 containerd[1559]: time="2026-03-03T13:45:14.088420584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:14.090192 containerd[1559]: time="2026-03-03T13:45:14.089614538Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 3 13:45:14.091306 containerd[1559]: time="2026-03-03T13:45:14.091246183Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:14.094739 containerd[1559]: time="2026-03-03T13:45:14.094679875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:14.096062 containerd[1559]: time="2026-03-03T13:45:14.095961830Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.586289841s" Mar 3 13:45:14.096062 containerd[1559]: time="2026-03-03T13:45:14.096010804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 3 13:45:14.098152 containerd[1559]: time="2026-03-03T13:45:14.098007319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 3 13:45:15.873717 containerd[1559]: time="2026-03-03T13:45:15.873468813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:15.875366 containerd[1559]: time="2026-03-03T13:45:15.874226413Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 3 13:45:15.875621 containerd[1559]: time="2026-03-03T13:45:15.875554676Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:15.878474 containerd[1559]: time="2026-03-03T13:45:15.878416089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:15.879394 containerd[1559]: time="2026-03-03T13:45:15.879342110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.781179391s" Mar 3 13:45:15.879394 containerd[1559]: time="2026-03-03T13:45:15.879387444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 3 13:45:15.881263 containerd[1559]: time="2026-03-03T13:45:15.881180265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 3 13:45:17.122539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918046223.mount: Deactivated successfully. 
Mar 3 13:45:19.384811 containerd[1559]: time="2026-03-03T13:45:19.383369359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:19.417659 containerd[1559]: time="2026-03-03T13:45:19.392490369Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 3 13:45:19.419292 containerd[1559]: time="2026-03-03T13:45:19.419136315Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:19.425795 containerd[1559]: time="2026-03-03T13:45:19.425373332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:19.427023 containerd[1559]: time="2026-03-03T13:45:19.426380313Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 3.545168297s" Mar 3 13:45:19.427023 containerd[1559]: time="2026-03-03T13:45:19.426418368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 3 13:45:19.431668 containerd[1559]: time="2026-03-03T13:45:19.431403716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 3 13:45:19.962172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258211561.mount: Deactivated successfully. Mar 3 13:45:22.195889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 3 13:45:22.263453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:45:22.754891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:22.801768 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 13:45:23.308656 update_engine[1543]: I20260303 13:45:23.308200 1543 update_attempter.cc:509] Updating boot flags... Mar 3 13:45:23.553711 kubelet[2191]: E0303 13:45:23.552139 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 13:45:23.558514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 13:45:23.558733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 13:45:23.559313 systemd[1]: kubelet.service: Consumed 1.126s CPU time, 107.6M memory peak. 
Mar 3 13:45:23.769977 containerd[1559]: time="2026-03-03T13:45:23.769880576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:23.771247 containerd[1559]: time="2026-03-03T13:45:23.771180401Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 3 13:45:23.773043 containerd[1559]: time="2026-03-03T13:45:23.772905880Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:23.776419 containerd[1559]: time="2026-03-03T13:45:23.776301825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:23.778144 containerd[1559]: time="2026-03-03T13:45:23.777976263Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.346485758s" Mar 3 13:45:23.778213 containerd[1559]: time="2026-03-03T13:45:23.778159802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 3 13:45:23.780356 containerd[1559]: time="2026-03-03T13:45:23.779985585Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 3 13:45:24.326362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468090663.mount: Deactivated successfully. 
Mar 3 13:45:24.338608 containerd[1559]: time="2026-03-03T13:45:24.338484577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:45:24.340142 containerd[1559]: time="2026-03-03T13:45:24.339941573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 3 13:45:24.341726 containerd[1559]: time="2026-03-03T13:45:24.341690418Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:45:24.345332 containerd[1559]: time="2026-03-03T13:45:24.345275623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:45:24.346279 containerd[1559]: time="2026-03-03T13:45:24.346250736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 566.237474ms" Mar 3 13:45:24.346279 containerd[1559]: time="2026-03-03T13:45:24.346276772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 3 13:45:24.348189 containerd[1559]: time="2026-03-03T13:45:24.348163203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 3 13:45:24.835271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089407606.mount: Deactivated 
successfully. Mar 3 13:45:27.653845 containerd[1559]: time="2026-03-03T13:45:27.652740607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:27.656686 containerd[1559]: time="2026-03-03T13:45:27.654448207Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 3 13:45:27.657197 containerd[1559]: time="2026-03-03T13:45:27.657155287Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:27.665636 containerd[1559]: time="2026-03-03T13:45:27.664858988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:27.673015 containerd[1559]: time="2026-03-03T13:45:27.672810194Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.324540457s" Mar 3 13:45:27.673015 containerd[1559]: time="2026-03-03T13:45:27.672986468Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 3 13:45:30.812867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:30.813307 systemd[1]: kubelet.service: Consumed 1.126s CPU time, 107.6M memory peak. Mar 3 13:45:30.817218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:45:30.855886 systemd[1]: Reload requested from client PID 2313 ('systemctl') (unit session-9.scope)... 
Mar 3 13:45:30.855993 systemd[1]: Reloading... Mar 3 13:45:30.957160 zram_generator::config[2359]: No configuration found. Mar 3 13:45:31.283313 systemd[1]: Reloading finished in 426 ms. Mar 3 13:45:31.378321 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 3 13:45:31.378499 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 3 13:45:31.379051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:31.379276 systemd[1]: kubelet.service: Consumed 186ms CPU time, 98.1M memory peak. Mar 3 13:45:31.382326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:45:31.634148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:31.649655 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:45:31.710579 kubelet[2404]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 13:45:31.710579 kubelet[2404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 13:45:31.710579 kubelet[2404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 3 13:45:31.711024 kubelet[2404]: I0303 13:45:31.710643 2404 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 13:45:32.383217 kubelet[2404]: I0303 13:45:32.383045 2404 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 3 13:45:32.383217 kubelet[2404]: I0303 13:45:32.383224 2404 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:45:32.383968 kubelet[2404]: I0303 13:45:32.383891 2404 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 13:45:32.415580 kubelet[2404]: E0303 13:45:32.415430 2404 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 13:45:32.420971 kubelet[2404]: I0303 13:45:32.420856 2404 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:45:32.429821 kubelet[2404]: I0303 13:45:32.429723 2404 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:45:32.442886 kubelet[2404]: I0303 13:45:32.442790 2404 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 3 13:45:32.444379 kubelet[2404]: I0303 13:45:32.444277 2404 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:45:32.444615 kubelet[2404]: I0303 13:45:32.444357 2404 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:45:32.444615 kubelet[2404]: I0303 13:45:32.444609 2404 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 13:45:32.444884 
kubelet[2404]: I0303 13:45:32.444626 2404 container_manager_linux.go:303] "Creating device plugin manager" Mar 3 13:45:32.444973 kubelet[2404]: I0303 13:45:32.444920 2404 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:45:32.449897 kubelet[2404]: I0303 13:45:32.449778 2404 kubelet.go:480] "Attempting to sync node with API server" Mar 3 13:45:32.449897 kubelet[2404]: I0303 13:45:32.449883 2404 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:45:32.450197 kubelet[2404]: I0303 13:45:32.449925 2404 kubelet.go:386] "Adding apiserver pod source" Mar 3 13:45:32.450197 kubelet[2404]: I0303 13:45:32.449953 2404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:45:32.453613 kubelet[2404]: E0303 13:45:32.453297 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 13:45:32.453888 kubelet[2404]: E0303 13:45:32.453858 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:45:32.456585 kubelet[2404]: I0303 13:45:32.456512 2404 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:45:32.457992 kubelet[2404]: I0303 13:45:32.457916 2404 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:45:32.459534 kubelet[2404]: W0303 13:45:32.459471 2404 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 3 13:45:32.466971 kubelet[2404]: I0303 13:45:32.466926 2404 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 3 13:45:32.467042 kubelet[2404]: I0303 13:45:32.467004 2404 server.go:1289] "Started kubelet" Mar 3 13:45:32.470267 kubelet[2404]: I0303 13:45:32.468185 2404 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 13:45:32.470267 kubelet[2404]: I0303 13:45:32.468533 2404 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:45:32.470267 kubelet[2404]: I0303 13:45:32.468565 2404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 13:45:32.470267 kubelet[2404]: I0303 13:45:32.468607 2404 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:45:32.470267 kubelet[2404]: I0303 13:45:32.469526 2404 server.go:317] "Adding debug handlers to kubelet server" Mar 3 13:45:32.470702 kubelet[2404]: I0303 13:45:32.470687 2404 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:45:32.471733 kubelet[2404]: E0303 13:45:32.471672 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:45:32.471805 kubelet[2404]: I0303 13:45:32.471771 2404 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 3 13:45:32.472072 kubelet[2404]: I0303 13:45:32.472004 2404 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 3 13:45:32.472388 kubelet[2404]: I0303 13:45:32.472343 2404 reconciler.go:26] "Reconciler: start to sync state" Mar 3 13:45:32.472752 kubelet[2404]: E0303 13:45:32.472707 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 13:45:32.473003 kubelet[2404]: E0303 13:45:32.472943 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Mar 3 13:45:32.474702 kubelet[2404]: E0303 13:45:32.471187 2404 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189958c53ae61153 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:45:32.466958675 +0000 UTC m=+0.809739814,LastTimestamp:2026-03-03 13:45:32.466958675 +0000 UTC m=+0.809739814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 13:45:32.474702 kubelet[2404]: E0303 13:45:32.474262 2404 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:45:32.476429 kubelet[2404]: I0303 13:45:32.475796 2404 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:45:32.476429 kubelet[2404]: I0303 13:45:32.475810 2404 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:45:32.476429 kubelet[2404]: I0303 13:45:32.475882 2404 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:45:32.497482 kubelet[2404]: I0303 13:45:32.496926 2404 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 13:45:32.497482 kubelet[2404]: I0303 13:45:32.496947 2404 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 13:45:32.497482 kubelet[2404]: I0303 13:45:32.496963 2404 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:45:32.573057 kubelet[2404]: E0303 13:45:32.572871 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:45:32.581710 kubelet[2404]: I0303 13:45:32.581510 2404 policy_none.go:49] "None policy: Start" Mar 3 13:45:32.581710 kubelet[2404]: I0303 13:45:32.581656 2404 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 3 13:45:32.581710 kubelet[2404]: I0303 13:45:32.581710 2404 state_mem.go:35] "Initializing new in-memory state store" Mar 3 13:45:32.586242 kubelet[2404]: I0303 13:45:32.586155 2404 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 3 13:45:32.589393 kubelet[2404]: I0303 13:45:32.589243 2404 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 3 13:45:32.589393 kubelet[2404]: I0303 13:45:32.589293 2404 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 3 13:45:32.589393 kubelet[2404]: I0303 13:45:32.589357 2404 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 13:45:32.589393 kubelet[2404]: I0303 13:45:32.589372 2404 kubelet.go:2436] "Starting kubelet main sync loop" Mar 3 13:45:32.589562 kubelet[2404]: E0303 13:45:32.589427 2404 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:45:32.592020 kubelet[2404]: E0303 13:45:32.591983 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 13:45:32.596705 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 3 13:45:32.618749 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 3 13:45:32.623886 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 3 13:45:32.633523 kubelet[2404]: E0303 13:45:32.633345 2404 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:45:32.634528 kubelet[2404]: I0303 13:45:32.634468 2404 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 13:45:32.634528 kubelet[2404]: I0303 13:45:32.634507 2404 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:45:32.634975 kubelet[2404]: I0303 13:45:32.634812 2404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 13:45:32.637233 kubelet[2404]: E0303 13:45:32.637143 2404 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:45:32.637702 kubelet[2404]: E0303 13:45:32.637392 2404 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 3 13:45:32.673911 kubelet[2404]: E0303 13:45:32.673825 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Mar 3 13:45:32.707836 systemd[1]: Created slice kubepods-burstable-pod7b09405a13e988e25ff3ffc583ed89a5.slice - libcontainer container kubepods-burstable-pod7b09405a13e988e25ff3ffc583ed89a5.slice. Mar 3 13:45:32.727599 kubelet[2404]: E0303 13:45:32.727506 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:32.731904 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. 
Mar 3 13:45:32.736425 kubelet[2404]: I0303 13:45:32.736281 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:45:32.736840 kubelet[2404]: E0303 13:45:32.736753 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Mar 3 13:45:32.742268 kubelet[2404]: E0303 13:45:32.742055 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:32.745859 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 3 13:45:32.748967 kubelet[2404]: E0303 13:45:32.748849 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:32.774135 kubelet[2404]: I0303 13:45:32.774021 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:32.774318 kubelet[2404]: I0303 13:45:32.774180 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:32.774318 kubelet[2404]: I0303 13:45:32.774253 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:32.774318 kubelet[2404]: I0303 13:45:32.774279 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:32.774318 kubelet[2404]: I0303 13:45:32.774306 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:32.774585 kubelet[2404]: I0303 13:45:32.774331 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:32.774585 kubelet[2404]: I0303 13:45:32.774356 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:32.774585 kubelet[2404]: I0303 13:45:32.774384 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:32.774585 kubelet[2404]: I0303 13:45:32.774411 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:32.939789 kubelet[2404]: I0303 13:45:32.939655 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:45:32.940277 kubelet[2404]: E0303 13:45:32.940170 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Mar 3 13:45:33.029134 kubelet[2404]: E0303 13:45:33.029006 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:33.030685 containerd[1559]: time="2026-03-03T13:45:33.030565466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b09405a13e988e25ff3ffc583ed89a5,Namespace:kube-system,Attempt:0,}" Mar 3 13:45:33.043050 kubelet[2404]: E0303 13:45:33.042921 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:33.043852 containerd[1559]: time="2026-03-03T13:45:33.043804348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 3 
13:45:33.050781 kubelet[2404]: E0303 13:45:33.050545 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:33.051431 containerd[1559]: time="2026-03-03T13:45:33.051321013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 3 13:45:33.075389 kubelet[2404]: E0303 13:45:33.075297 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Mar 3 13:45:33.093802 containerd[1559]: time="2026-03-03T13:45:33.093685070Z" level=info msg="connecting to shim 24b9c846205a87d1ea048c1a4778fd66ee57cf84f3dad5d33e704b46b45dc645" address="unix:///run/containerd/s/d95c5732d88fda66511cc16677c893f72b1cf160a3ffe2a03562772f6de4c27f" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:45:33.137312 containerd[1559]: time="2026-03-03T13:45:33.137039469Z" level=info msg="connecting to shim ec860143d6dedbbdb41831aee34f856257f16762b3d0185c82103147d2b98230" address="unix:///run/containerd/s/5ab45bf32ae34076eba633005288b6578b63fab82a6c8782f3257378bdee8ffb" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:45:33.152219 containerd[1559]: time="2026-03-03T13:45:33.152004417Z" level=info msg="connecting to shim 69737c9476c40b3df89f480131dfb72557cb655a960534ea179f7a6863e6632f" address="unix:///run/containerd/s/40988b59e4efc3f6fc98a32d22a32e5de3e584fc4e6b158af3d5cb907bc208ed" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:45:33.219531 systemd[1]: Started cri-containerd-69737c9476c40b3df89f480131dfb72557cb655a960534ea179f7a6863e6632f.scope - libcontainer container 69737c9476c40b3df89f480131dfb72557cb655a960534ea179f7a6863e6632f. 
Mar 3 13:45:33.226691 systemd[1]: Started cri-containerd-ec860143d6dedbbdb41831aee34f856257f16762b3d0185c82103147d2b98230.scope - libcontainer container ec860143d6dedbbdb41831aee34f856257f16762b3d0185c82103147d2b98230. Mar 3 13:45:33.279426 systemd[1]: Started cri-containerd-24b9c846205a87d1ea048c1a4778fd66ee57cf84f3dad5d33e704b46b45dc645.scope - libcontainer container 24b9c846205a87d1ea048c1a4778fd66ee57cf84f3dad5d33e704b46b45dc645. Mar 3 13:45:33.286420 kubelet[2404]: E0303 13:45:33.286343 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 13:45:33.346365 kubelet[2404]: I0303 13:45:33.346212 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:45:33.349187 kubelet[2404]: E0303 13:45:33.348976 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Mar 3 13:45:33.400609 containerd[1559]: time="2026-03-03T13:45:33.400017493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"69737c9476c40b3df89f480131dfb72557cb655a960534ea179f7a6863e6632f\"" Mar 3 13:45:33.404708 kubelet[2404]: E0303 13:45:33.404518 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:33.411707 containerd[1559]: time="2026-03-03T13:45:33.411673694Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec860143d6dedbbdb41831aee34f856257f16762b3d0185c82103147d2b98230\"" Mar 3 13:45:33.412767 kubelet[2404]: E0303 13:45:33.412690 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:33.416629 containerd[1559]: time="2026-03-03T13:45:33.416593771Z" level=info msg="CreateContainer within sandbox \"69737c9476c40b3df89f480131dfb72557cb655a960534ea179f7a6863e6632f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 13:45:33.418556 containerd[1559]: time="2026-03-03T13:45:33.418523231Z" level=info msg="CreateContainer within sandbox \"ec860143d6dedbbdb41831aee34f856257f16762b3d0185c82103147d2b98230\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 13:45:33.449775 kubelet[2404]: E0303 13:45:33.449674 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:45:33.461832 containerd[1559]: time="2026-03-03T13:45:33.461745454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b09405a13e988e25ff3ffc583ed89a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"24b9c846205a87d1ea048c1a4778fd66ee57cf84f3dad5d33e704b46b45dc645\"" Mar 3 13:45:33.462917 kubelet[2404]: E0303 13:45:33.462823 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:33.471334 containerd[1559]: 
time="2026-03-03T13:45:33.470652172Z" level=info msg="CreateContainer within sandbox \"24b9c846205a87d1ea048c1a4778fd66ee57cf84f3dad5d33e704b46b45dc645\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 13:45:33.473474 containerd[1559]: time="2026-03-03T13:45:33.473388510Z" level=info msg="Container ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:45:33.475626 containerd[1559]: time="2026-03-03T13:45:33.475574099Z" level=info msg="Container 41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:45:33.493944 containerd[1559]: time="2026-03-03T13:45:33.493905465Z" level=info msg="CreateContainer within sandbox \"69737c9476c40b3df89f480131dfb72557cb655a960534ea179f7a6863e6632f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72\"" Mar 3 13:45:33.496798 containerd[1559]: time="2026-03-03T13:45:33.496498525Z" level=info msg="StartContainer for \"ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72\"" Mar 3 13:45:33.498690 containerd[1559]: time="2026-03-03T13:45:33.498218465Z" level=info msg="Container ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:45:33.499907 containerd[1559]: time="2026-03-03T13:45:33.499844219Z" level=info msg="connecting to shim ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72" address="unix:///run/containerd/s/40988b59e4efc3f6fc98a32d22a32e5de3e584fc4e6b158af3d5cb907bc208ed" protocol=ttrpc version=3 Mar 3 13:45:33.509825 containerd[1559]: time="2026-03-03T13:45:33.509743265Z" level=info msg="CreateContainer within sandbox \"ec860143d6dedbbdb41831aee34f856257f16762b3d0185c82103147d2b98230\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5\"" Mar 3 13:45:33.517145 containerd[1559]: time="2026-03-03T13:45:33.517033923Z" level=info msg="CreateContainer within sandbox \"24b9c846205a87d1ea048c1a4778fd66ee57cf84f3dad5d33e704b46b45dc645\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9\"" Mar 3 13:45:33.517792 containerd[1559]: time="2026-03-03T13:45:33.517729285Z" level=info msg="StartContainer for \"41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5\"" Mar 3 13:45:33.518679 containerd[1559]: time="2026-03-03T13:45:33.518585797Z" level=info msg="StartContainer for \"ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9\"" Mar 3 13:45:33.520409 containerd[1559]: time="2026-03-03T13:45:33.520337320Z" level=info msg="connecting to shim ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9" address="unix:///run/containerd/s/d95c5732d88fda66511cc16677c893f72b1cf160a3ffe2a03562772f6de4c27f" protocol=ttrpc version=3 Mar 3 13:45:33.539205 systemd[1]: Started cri-containerd-ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72.scope - libcontainer container ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72. 
Mar 3 13:45:33.539601 kubelet[2404]: E0303 13:45:33.539550 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 13:45:33.539867 containerd[1559]: time="2026-03-03T13:45:33.539725453Z" level=info msg="connecting to shim 41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5" address="unix:///run/containerd/s/5ab45bf32ae34076eba633005288b6578b63fab82a6c8782f3257378bdee8ffb" protocol=ttrpc version=3 Mar 3 13:45:33.598655 systemd[1]: Started cri-containerd-ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9.scope - libcontainer container ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9. Mar 3 13:45:33.604792 systemd[1]: Started cri-containerd-41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5.scope - libcontainer container 41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5. 
Mar 3 13:45:33.682149 containerd[1559]: time="2026-03-03T13:45:33.682040754Z" level=info msg="StartContainer for \"ddd6c988156afa9519ec4338cad0130d63f2ece368653dfd727a57ee67a4ad72\" returns successfully" Mar 3 13:45:33.740748 containerd[1559]: time="2026-03-03T13:45:33.740183572Z" level=info msg="StartContainer for \"41818229602bc162723630cb488aa662fae25254348f4bc9f2fdb9e0fa25c1a5\" returns successfully" Mar 3 13:45:33.770455 containerd[1559]: time="2026-03-03T13:45:33.770210766Z" level=info msg="StartContainer for \"ebd01a823c57d753a37862c9e84490eaab3e77c14db38462c8fe14f07f491da9\" returns successfully" Mar 3 13:45:33.878461 kubelet[2404]: E0303 13:45:33.878301 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Mar 3 13:45:34.154067 kubelet[2404]: I0303 13:45:34.153750 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:45:34.636817 kubelet[2404]: E0303 13:45:34.636771 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:34.638156 kubelet[2404]: E0303 13:45:34.636992 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:34.642318 kubelet[2404]: E0303 13:45:34.642235 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:34.642574 kubelet[2404]: E0303 13:45:34.642425 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:34.649017 
kubelet[2404]: E0303 13:45:34.648901 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:34.649357 kubelet[2404]: E0303 13:45:34.649278 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:35.485209 kubelet[2404]: E0303 13:45:35.485014 2404 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 3 13:45:35.582788 kubelet[2404]: I0303 13:45:35.582677 2404 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 3 13:45:35.582788 kubelet[2404]: E0303 13:45:35.582759 2404 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 3 13:45:35.603824 kubelet[2404]: E0303 13:45:35.603718 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:45:35.656178 kubelet[2404]: E0303 13:45:35.655471 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:35.656178 kubelet[2404]: E0303 13:45:35.655666 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:35.656178 kubelet[2404]: E0303 13:45:35.655934 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:35.656178 kubelet[2404]: E0303 13:45:35.656041 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:35.657024 kubelet[2404]: E0303 13:45:35.656999 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:45:35.659444 kubelet[2404]: E0303 13:45:35.659360 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:35.704254 kubelet[2404]: E0303 13:45:35.704175 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:45:35.805543 kubelet[2404]: E0303 13:45:35.805245 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:45:35.873746 kubelet[2404]: I0303 13:45:35.873574 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:35.884397 kubelet[2404]: E0303 13:45:35.884262 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:35.884397 kubelet[2404]: I0303 13:45:35.884320 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:35.886902 kubelet[2404]: E0303 13:45:35.886836 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:35.886902 kubelet[2404]: I0303 13:45:35.886885 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:35.889110 kubelet[2404]: E0303 13:45:35.889015 2404 kubelet.go:3311] "Failed creating a mirror 
pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:36.456095 kubelet[2404]: I0303 13:45:36.455956 2404 apiserver.go:52] "Watching apiserver" Mar 3 13:45:36.472404 kubelet[2404]: I0303 13:45:36.472349 2404 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 3 13:45:36.653860 kubelet[2404]: I0303 13:45:36.653797 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:36.654398 kubelet[2404]: I0303 13:45:36.653967 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:36.666374 kubelet[2404]: E0303 13:45:36.665601 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:36.667816 kubelet[2404]: E0303 13:45:36.667768 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:37.601457 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)... Mar 3 13:45:37.601514 systemd[1]: Reloading... Mar 3 13:45:37.656515 kubelet[2404]: E0303 13:45:37.655904 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:37.656515 kubelet[2404]: E0303 13:45:37.656167 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:37.698267 zram_generator::config[2724]: No configuration found. Mar 3 13:45:37.968278 systemd[1]: Reloading finished in 365 ms. 
Mar 3 13:45:38.012578 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:45:38.030991 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 13:45:38.031438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:38.031532 systemd[1]: kubelet.service: Consumed 1.565s CPU time, 131.5M memory peak. Mar 3 13:45:38.035293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:45:38.283977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:45:38.311949 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:45:38.371254 kubelet[2772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 13:45:38.371254 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 13:45:38.371254 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 3 13:45:38.372178 kubelet[2772]: I0303 13:45:38.371322 2772 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 13:45:38.381153 kubelet[2772]: I0303 13:45:38.380593 2772 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 3 13:45:38.381153 kubelet[2772]: I0303 13:45:38.380617 2772 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:45:38.381153 kubelet[2772]: I0303 13:45:38.380852 2772 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 13:45:38.382931 kubelet[2772]: I0303 13:45:38.382855 2772 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 13:45:38.386812 kubelet[2772]: I0303 13:45:38.386737 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:45:38.396560 kubelet[2772]: I0303 13:45:38.396481 2772 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:45:38.405862 kubelet[2772]: I0303 13:45:38.405738 2772 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 3 13:45:38.406294 kubelet[2772]: I0303 13:45:38.406203 2772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:45:38.406489 kubelet[2772]: I0303 13:45:38.406254 2772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:45:38.406489 kubelet[2772]: I0303 13:45:38.406482 2772 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 13:45:38.406489 
kubelet[2772]: I0303 13:45:38.406492 2772 container_manager_linux.go:303] "Creating device plugin manager" Mar 3 13:45:38.406643 kubelet[2772]: I0303 13:45:38.406544 2772 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:45:38.406941 kubelet[2772]: I0303 13:45:38.406818 2772 kubelet.go:480] "Attempting to sync node with API server" Mar 3 13:45:38.407001 kubelet[2772]: I0303 13:45:38.406984 2772 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:45:38.407044 kubelet[2772]: I0303 13:45:38.407013 2772 kubelet.go:386] "Adding apiserver pod source" Mar 3 13:45:38.407044 kubelet[2772]: I0303 13:45:38.407028 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:45:38.411257 kubelet[2772]: I0303 13:45:38.411149 2772 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:45:38.411928 kubelet[2772]: I0303 13:45:38.411839 2772 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:45:38.427191 kubelet[2772]: I0303 13:45:38.426514 2772 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 3 13:45:38.427191 kubelet[2772]: I0303 13:45:38.426598 2772 server.go:1289] "Started kubelet" Mar 3 13:45:38.427191 kubelet[2772]: I0303 13:45:38.426657 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:45:38.427711 kubelet[2772]: I0303 13:45:38.427658 2772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 13:45:38.428009 kubelet[2772]: I0303 13:45:38.427917 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 13:45:38.431194 kubelet[2772]: I0303 13:45:38.429819 2772 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:45:38.431194 kubelet[2772]: I0303 
13:45:38.430320 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:45:38.431194 kubelet[2772]: I0303 13:45:38.428956 2772 server.go:317] "Adding debug handlers to kubelet server" Mar 3 13:45:38.438589 kubelet[2772]: I0303 13:45:38.438432 2772 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 3 13:45:38.439645 kubelet[2772]: I0303 13:45:38.439574 2772 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 3 13:45:38.440793 kubelet[2772]: I0303 13:45:38.440465 2772 reconciler.go:26] "Reconciler: start to sync state" Mar 3 13:45:38.445005 kubelet[2772]: I0303 13:45:38.444594 2772 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:45:38.445005 kubelet[2772]: I0303 13:45:38.444639 2772 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:45:38.445005 kubelet[2772]: I0303 13:45:38.444740 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:45:38.448005 kubelet[2772]: E0303 13:45:38.447732 2772 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:45:38.485290 kubelet[2772]: I0303 13:45:38.485004 2772 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 3 13:45:38.491067 kubelet[2772]: I0303 13:45:38.490220 2772 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 3 13:45:38.491067 kubelet[2772]: I0303 13:45:38.490251 2772 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 3 13:45:38.491067 kubelet[2772]: I0303 13:45:38.490277 2772 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 13:45:38.491067 kubelet[2772]: I0303 13:45:38.490342 2772 kubelet.go:2436] "Starting kubelet main sync loop" Mar 3 13:45:38.491067 kubelet[2772]: E0303 13:45:38.490405 2772 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:45:38.544486 kubelet[2772]: I0303 13:45:38.544249 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 13:45:38.544486 kubelet[2772]: I0303 13:45:38.544342 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 13:45:38.544486 kubelet[2772]: I0303 13:45:38.544372 2772 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:45:38.544688 kubelet[2772]: I0303 13:45:38.544529 2772 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 3 13:45:38.544688 kubelet[2772]: I0303 13:45:38.544540 2772 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 3 13:45:38.544688 kubelet[2772]: I0303 13:45:38.544559 2772 policy_none.go:49] "None policy: Start" Mar 3 13:45:38.544688 kubelet[2772]: I0303 13:45:38.544570 2772 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 3 13:45:38.544688 kubelet[2772]: I0303 13:45:38.544583 2772 state_mem.go:35] "Initializing new in-memory state store" Mar 3 13:45:38.544688 kubelet[2772]: I0303 13:45:38.544674 2772 state_mem.go:75] "Updated machine memory state" Mar 3 13:45:38.553235 kubelet[2772]: E0303 13:45:38.553190 2772 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:45:38.553683 kubelet[2772]: I0303 13:45:38.553358 
2772 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 13:45:38.553683 kubelet[2772]: I0303 13:45:38.553369 2772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:45:38.553778 kubelet[2772]: I0303 13:45:38.553736 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 13:45:38.559334 kubelet[2772]: E0303 13:45:38.559263 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:45:38.591287 kubelet[2772]: I0303 13:45:38.591219 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:38.591764 kubelet[2772]: I0303 13:45:38.591715 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:38.593264 kubelet[2772]: I0303 13:45:38.591850 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:38.600955 kubelet[2772]: E0303 13:45:38.600852 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:38.601212 kubelet[2772]: E0303 13:45:38.601165 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:38.608523 sudo[2814]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 3 13:45:38.609424 sudo[2814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 3 13:45:38.641163 kubelet[2772]: I0303 13:45:38.640868 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-ca-certs\") 
pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:38.641163 kubelet[2772]: I0303 13:45:38.641011 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:38.641163 kubelet[2772]: I0303 13:45:38.641051 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:38.641163 kubelet[2772]: I0303 13:45:38.641161 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:38.641467 kubelet[2772]: I0303 13:45:38.641199 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:38.641467 kubelet[2772]: I0303 13:45:38.641219 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-k8s-certs\") 
pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:38.641467 kubelet[2772]: I0303 13:45:38.641239 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:38.641467 kubelet[2772]: I0303 13:45:38.641262 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:38.641467 kubelet[2772]: I0303 13:45:38.641283 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:38.667525 kubelet[2772]: I0303 13:45:38.667472 2772 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:45:38.681443 kubelet[2772]: I0303 13:45:38.681401 2772 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 3 13:45:38.681443 kubelet[2772]: I0303 13:45:38.681474 2772 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 3 13:45:38.900534 kubelet[2772]: E0303 13:45:38.899602 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 
13:45:38.901647 kubelet[2772]: E0303 13:45:38.901593 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:38.901694 kubelet[2772]: E0303 13:45:38.901683 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:39.127523 sudo[2814]: pam_unix(sudo:session): session closed for user root Mar 3 13:45:39.409948 kubelet[2772]: I0303 13:45:39.409404 2772 apiserver.go:52] "Watching apiserver" Mar 3 13:45:39.440816 kubelet[2772]: I0303 13:45:39.440736 2772 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 3 13:45:39.526501 kubelet[2772]: I0303 13:45:39.526458 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:39.526955 kubelet[2772]: I0303 13:45:39.526751 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:39.528326 kubelet[2772]: I0303 13:45:39.526071 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:39.537937 kubelet[2772]: E0303 13:45:39.537859 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 3 13:45:39.538489 kubelet[2772]: E0303 13:45:39.538446 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:39.541326 kubelet[2772]: E0303 13:45:39.541276 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 3 13:45:39.541474 
kubelet[2772]: E0303 13:45:39.541423 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:39.547324 kubelet[2772]: E0303 13:45:39.546728 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:45:39.547324 kubelet[2772]: E0303 13:45:39.547184 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:39.652328 kubelet[2772]: I0303 13:45:39.652233 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.652211954 podStartE2EDuration="3.652211954s" podCreationTimestamp="2026-03-03 13:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:45:39.651790379 +0000 UTC m=+1.331305985" watchObservedRunningTime="2026-03-03 13:45:39.652211954 +0000 UTC m=+1.331727559" Mar 3 13:45:39.652846 kubelet[2772]: I0303 13:45:39.652384 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6523717470000001 podStartE2EDuration="1.652371747s" podCreationTimestamp="2026-03-03 13:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:45:39.640057169 +0000 UTC m=+1.319572775" watchObservedRunningTime="2026-03-03 13:45:39.652371747 +0000 UTC m=+1.331887352" Mar 3 13:45:39.663207 kubelet[2772]: I0303 13:45:39.662814 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=3.662794843 podStartE2EDuration="3.662794843s" podCreationTimestamp="2026-03-03 13:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:45:39.662740121 +0000 UTC m=+1.342255727" watchObservedRunningTime="2026-03-03 13:45:39.662794843 +0000 UTC m=+1.342310448" Mar 3 13:45:40.530464 kubelet[2772]: E0303 13:45:40.530341 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:40.530464 kubelet[2772]: E0303 13:45:40.530409 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:40.531604 kubelet[2772]: E0303 13:45:40.531544 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:40.566581 sudo[1781]: pam_unix(sudo:session): session closed for user root Mar 3 13:45:40.568168 sshd[1780]: Connection closed by 10.0.0.1 port 51378 Mar 3 13:45:40.568785 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Mar 3 13:45:40.574247 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:51378.service: Deactivated successfully. Mar 3 13:45:40.577333 systemd[1]: session-9.scope: Deactivated successfully. Mar 3 13:45:40.577733 systemd[1]: session-9.scope: Consumed 9.950s CPU time, 274.8M memory peak. Mar 3 13:45:40.579856 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Mar 3 13:45:40.582229 systemd-logind[1542]: Removed session 9. 
Mar 3 13:45:41.531387 kubelet[2772]: E0303 13:45:41.531289 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:41.531795 kubelet[2772]: E0303 13:45:41.531404 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:43.145779 kubelet[2772]: I0303 13:45:43.145676 2772 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 13:45:43.146589 containerd[1559]: time="2026-03-03T13:45:43.146364551Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 3 13:45:43.146915 kubelet[2772]: I0303 13:45:43.146876 2772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 13:45:43.939017 systemd[1]: Created slice kubepods-besteffort-podd2230065_4188_487d_9a41_5a02aff0ec57.slice - libcontainer container kubepods-besteffort-podd2230065_4188_487d_9a41_5a02aff0ec57.slice. Mar 3 13:45:43.960796 systemd[1]: Created slice kubepods-burstable-pod21f3a3df_63ae_4df2_aac7_995ab3d2e8b1.slice - libcontainer container kubepods-burstable-pod21f3a3df_63ae_4df2_aac7_995ab3d2e8b1.slice. 
Mar 3 13:45:43.982803 kubelet[2772]: I0303 13:45:43.982655 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-config-path\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.982803 kubelet[2772]: I0303 13:45:43.982712 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2230065-4188-487d-9a41-5a02aff0ec57-lib-modules\") pod \"kube-proxy-gfgt5\" (UID: \"d2230065-4188-487d-9a41-5a02aff0ec57\") " pod="kube-system/kube-proxy-gfgt5" Mar 3 13:45:43.982803 kubelet[2772]: I0303 13:45:43.982728 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hostproc\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.982803 kubelet[2772]: I0303 13:45:43.982740 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cni-path\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.982803 kubelet[2772]: I0303 13:45:43.982753 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-net\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.982803 kubelet[2772]: I0303 13:45:43.982765 2772 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hubble-tls\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983290 kubelet[2772]: I0303 13:45:43.982778 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2230065-4188-487d-9a41-5a02aff0ec57-kube-proxy\") pod \"kube-proxy-gfgt5\" (UID: \"d2230065-4188-487d-9a41-5a02aff0ec57\") " pod="kube-system/kube-proxy-gfgt5" Mar 3 13:45:43.983290 kubelet[2772]: I0303 13:45:43.982792 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l77x\" (UniqueName: \"kubernetes.io/projected/d2230065-4188-487d-9a41-5a02aff0ec57-kube-api-access-4l77x\") pod \"kube-proxy-gfgt5\" (UID: \"d2230065-4188-487d-9a41-5a02aff0ec57\") " pod="kube-system/kube-proxy-gfgt5" Mar 3 13:45:43.983290 kubelet[2772]: I0303 13:45:43.982806 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-run\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983290 kubelet[2772]: I0303 13:45:43.982818 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-clustermesh-secrets\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983290 kubelet[2772]: I0303 13:45:43.982830 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk8w\" (UniqueName: 
\"kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-kube-api-access-rsk8w\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983492 kubelet[2772]: I0303 13:45:43.982845 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2230065-4188-487d-9a41-5a02aff0ec57-xtables-lock\") pod \"kube-proxy-gfgt5\" (UID: \"d2230065-4188-487d-9a41-5a02aff0ec57\") " pod="kube-system/kube-proxy-gfgt5" Mar 3 13:45:43.983492 kubelet[2772]: I0303 13:45:43.982925 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-bpf-maps\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983492 kubelet[2772]: I0303 13:45:43.982975 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-lib-modules\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983492 kubelet[2772]: I0303 13:45:43.983009 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-kernel\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983492 kubelet[2772]: I0303 13:45:43.983034 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-cgroup\") pod \"cilium-2cbpw\" (UID: 
\"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983492 kubelet[2772]: I0303 13:45:43.983055 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-etc-cni-netd\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:43.983917 kubelet[2772]: I0303 13:45:43.983167 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-xtables-lock\") pod \"cilium-2cbpw\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") " pod="kube-system/cilium-2cbpw" Mar 3 13:45:44.104377 kubelet[2772]: E0303 13:45:44.104139 2772 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 3 13:45:44.104377 kubelet[2772]: E0303 13:45:44.104307 2772 projected.go:194] Error preparing data for projected volume kube-api-access-rsk8w for pod kube-system/cilium-2cbpw: configmap "kube-root-ca.crt" not found Mar 3 13:45:44.104634 kubelet[2772]: E0303 13:45:44.104552 2772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-kube-api-access-rsk8w podName:21f3a3df-63ae-4df2-aac7-995ab3d2e8b1 nodeName:}" failed. No retries permitted until 2026-03-03 13:45:44.604429884 +0000 UTC m=+6.283945490 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rsk8w" (UniqueName: "kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-kube-api-access-rsk8w") pod "cilium-2cbpw" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1") : configmap "kube-root-ca.crt" not found Mar 3 13:45:44.110792 kubelet[2772]: E0303 13:45:44.110725 2772 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 3 13:45:44.110939 kubelet[2772]: E0303 13:45:44.110883 2772 projected.go:194] Error preparing data for projected volume kube-api-access-4l77x for pod kube-system/kube-proxy-gfgt5: configmap "kube-root-ca.crt" not found Mar 3 13:45:44.111351 kubelet[2772]: E0303 13:45:44.111204 2772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d2230065-4188-487d-9a41-5a02aff0ec57-kube-api-access-4l77x podName:d2230065-4188-487d-9a41-5a02aff0ec57 nodeName:}" failed. No retries permitted until 2026-03-03 13:45:44.611181443 +0000 UTC m=+6.290697049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4l77x" (UniqueName: "kubernetes.io/projected/d2230065-4188-487d-9a41-5a02aff0ec57-kube-api-access-4l77x") pod "kube-proxy-gfgt5" (UID: "d2230065-4188-487d-9a41-5a02aff0ec57") : configmap "kube-root-ca.crt" not found Mar 3 13:45:44.411413 systemd[1]: Created slice kubepods-besteffort-podb606ab69_5bb0_4d0b_a8ea_ea4124d340c6.slice - libcontainer container kubepods-besteffort-podb606ab69_5bb0_4d0b_a8ea_ea4124d340c6.slice. 
Mar 3 13:45:44.489501 kubelet[2772]: I0303 13:45:44.489345 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qbmdk\" (UID: \"b606ab69-5bb0-4d0b-a8ea-ea4124d340c6\") " pod="kube-system/cilium-operator-6c4d7847fc-qbmdk" Mar 3 13:45:44.489501 kubelet[2772]: I0303 13:45:44.489416 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpx7\" (UniqueName: \"kubernetes.io/projected/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-kube-api-access-8gpx7\") pod \"cilium-operator-6c4d7847fc-qbmdk\" (UID: \"b606ab69-5bb0-4d0b-a8ea-ea4124d340c6\") " pod="kube-system/cilium-operator-6c4d7847fc-qbmdk" Mar 3 13:45:44.721003 kubelet[2772]: E0303 13:45:44.720903 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:44.721742 containerd[1559]: time="2026-03-03T13:45:44.721667208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qbmdk,Uid:b606ab69-5bb0-4d0b-a8ea-ea4124d340c6,Namespace:kube-system,Attempt:0,}" Mar 3 13:45:44.759197 containerd[1559]: time="2026-03-03T13:45:44.758960871Z" level=info msg="connecting to shim 4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c" address="unix:///run/containerd/s/524135db95768ba58f4b4557b0960a202caed503fc13ae4bf42e28b6f95096c4" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:45:44.805467 systemd[1]: Started cri-containerd-4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c.scope - libcontainer container 4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c. 
Mar 3 13:45:44.858816 kubelet[2772]: E0303 13:45:44.858724 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:44.859653 containerd[1559]: time="2026-03-03T13:45:44.859560419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gfgt5,Uid:d2230065-4188-487d-9a41-5a02aff0ec57,Namespace:kube-system,Attempt:0,}" Mar 3 13:45:44.871052 kubelet[2772]: E0303 13:45:44.871000 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:44.871767 containerd[1559]: time="2026-03-03T13:45:44.871710512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cbpw,Uid:21f3a3df-63ae-4df2-aac7-995ab3d2e8b1,Namespace:kube-system,Attempt:0,}" Mar 3 13:45:44.891488 containerd[1559]: time="2026-03-03T13:45:44.891387195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qbmdk,Uid:b606ab69-5bb0-4d0b-a8ea-ea4124d340c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\"" Mar 3 13:45:44.892911 kubelet[2772]: E0303 13:45:44.892778 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:44.903202 containerd[1559]: time="2026-03-03T13:45:44.902711454Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 3 13:45:44.909624 containerd[1559]: time="2026-03-03T13:45:44.909549983Z" level=info msg="connecting to shim 9e422126429ec691211171f2a6f0ff3dd82276c5abbca7e1aac98c0f9daaa965" address="unix:///run/containerd/s/dcf87af389bd8dbda52921dabb07e69800ae58ddf420b2f505e42a650753c488" 
namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:45:44.918969 containerd[1559]: time="2026-03-03T13:45:44.918929513Z" level=info msg="connecting to shim 5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec" address="unix:///run/containerd/s/afe4ca79efed817fe29eb6d81fd4bcb22877f11df1f86732a430c1fa7300154f" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:45:44.974427 systemd[1]: Started cri-containerd-5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec.scope - libcontainer container 5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec. Mar 3 13:45:44.978933 systemd[1]: Started cri-containerd-9e422126429ec691211171f2a6f0ff3dd82276c5abbca7e1aac98c0f9daaa965.scope - libcontainer container 9e422126429ec691211171f2a6f0ff3dd82276c5abbca7e1aac98c0f9daaa965. Mar 3 13:45:45.031588 containerd[1559]: time="2026-03-03T13:45:45.031416866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cbpw,Uid:21f3a3df-63ae-4df2-aac7-995ab3d2e8b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\"" Mar 3 13:45:45.035172 kubelet[2772]: E0303 13:45:45.034226 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:45.042144 containerd[1559]: time="2026-03-03T13:45:45.041970868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gfgt5,Uid:d2230065-4188-487d-9a41-5a02aff0ec57,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e422126429ec691211171f2a6f0ff3dd82276c5abbca7e1aac98c0f9daaa965\"" Mar 3 13:45:45.042955 kubelet[2772]: E0303 13:45:45.042921 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:45.048949 containerd[1559]: time="2026-03-03T13:45:45.048909283Z" level=info 
msg="CreateContainer within sandbox \"9e422126429ec691211171f2a6f0ff3dd82276c5abbca7e1aac98c0f9daaa965\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 13:45:45.065017 containerd[1559]: time="2026-03-03T13:45:45.064902639Z" level=info msg="Container 32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:45:45.075561 containerd[1559]: time="2026-03-03T13:45:45.075372390Z" level=info msg="CreateContainer within sandbox \"9e422126429ec691211171f2a6f0ff3dd82276c5abbca7e1aac98c0f9daaa965\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99\"" Mar 3 13:45:45.076145 containerd[1559]: time="2026-03-03T13:45:45.076018946Z" level=info msg="StartContainer for \"32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99\"" Mar 3 13:45:45.077965 containerd[1559]: time="2026-03-03T13:45:45.077873635Z" level=info msg="connecting to shim 32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99" address="unix:///run/containerd/s/dcf87af389bd8dbda52921dabb07e69800ae58ddf420b2f505e42a650753c488" protocol=ttrpc version=3 Mar 3 13:45:45.128463 systemd[1]: Started cri-containerd-32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99.scope - libcontainer container 32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99. Mar 3 13:45:45.246041 containerd[1559]: time="2026-03-03T13:45:45.245005675Z" level=info msg="StartContainer for \"32ddef19dc1d8e113af27a3b8afe4fa4e59d08df5f9da257117d637422ff0c99\" returns successfully" Mar 3 13:45:45.551165 kubelet[2772]: E0303 13:45:45.550712 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:45.633375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230628507.mount: Deactivated successfully. 
Mar 3 13:45:46.795692 containerd[1559]: time="2026-03-03T13:45:46.795471113Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:46.798350 containerd[1559]: time="2026-03-03T13:45:46.798309135Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 3 13:45:46.799627 containerd[1559]: time="2026-03-03T13:45:46.799576182Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:45:46.801884 containerd[1559]: time="2026-03-03T13:45:46.801810263Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.899050037s" Mar 3 13:45:46.801884 containerd[1559]: time="2026-03-03T13:45:46.801876667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 3 13:45:46.803566 containerd[1559]: time="2026-03-03T13:45:46.803545590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 3 13:45:46.807635 containerd[1559]: time="2026-03-03T13:45:46.807492174Z" level=info msg="CreateContainer within sandbox 
\"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 3 13:45:46.819836 containerd[1559]: time="2026-03-03T13:45:46.819745911Z" level=info msg="Container acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:45:46.829844 containerd[1559]: time="2026-03-03T13:45:46.829771902Z" level=info msg="CreateContainer within sandbox \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\"" Mar 3 13:45:46.830653 containerd[1559]: time="2026-03-03T13:45:46.830577298Z" level=info msg="StartContainer for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\"" Mar 3 13:45:46.831945 containerd[1559]: time="2026-03-03T13:45:46.831847982Z" level=info msg="connecting to shim acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e" address="unix:///run/containerd/s/524135db95768ba58f4b4557b0960a202caed503fc13ae4bf42e28b6f95096c4" protocol=ttrpc version=3 Mar 3 13:45:46.892366 systemd[1]: Started cri-containerd-acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e.scope - libcontainer container acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e. 
Mar 3 13:45:46.948503 containerd[1559]: time="2026-03-03T13:45:46.948396190Z" level=info msg="StartContainer for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" returns successfully" Mar 3 13:45:47.567726 kubelet[2772]: E0303 13:45:47.567650 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:47.601320 kubelet[2772]: I0303 13:45:47.601202 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gfgt5" podStartSLOduration=4.601182968 podStartE2EDuration="4.601182968s" podCreationTimestamp="2026-03-03 13:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:45:45.580714583 +0000 UTC m=+7.260230189" watchObservedRunningTime="2026-03-03 13:45:47.601182968 +0000 UTC m=+9.280698575" Mar 3 13:45:48.331727 kubelet[2772]: E0303 13:45:48.331375 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:48.350596 kubelet[2772]: I0303 13:45:48.350446 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qbmdk" podStartSLOduration=2.443129896 podStartE2EDuration="4.350426454s" podCreationTimestamp="2026-03-03 13:45:44 +0000 UTC" firstStartedPulling="2026-03-03 13:45:44.895757713 +0000 UTC m=+6.575273319" lastFinishedPulling="2026-03-03 13:45:46.803054272 +0000 UTC m=+8.482569877" observedRunningTime="2026-03-03 13:45:47.60293779 +0000 UTC m=+9.282453406" watchObservedRunningTime="2026-03-03 13:45:48.350426454 +0000 UTC m=+10.029942060" Mar 3 13:45:48.625743 kubelet[2772]: E0303 13:45:48.618697 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:48.677601 kubelet[2772]: E0303 13:45:48.675925 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:49.069899 kubelet[2772]: E0303 13:45:49.069065 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:49.640540 kubelet[2772]: E0303 13:45:49.639976 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:51.268489 kubelet[2772]: E0303 13:45:51.267440 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:45:55.589716 kubelet[2772]: E0303 13:45:55.586655 2772 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.014s" Mar 3 13:46:01.223275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300707203.mount: Deactivated successfully. 
Mar 3 13:46:04.551754 containerd[1559]: time="2026-03-03T13:46:04.551454807Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:04.553173 containerd[1559]: time="2026-03-03T13:46:04.552978620Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 3 13:46:04.554916 containerd[1559]: time="2026-03-03T13:46:04.554852400Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:04.557661 containerd[1559]: time="2026-03-03T13:46:04.557521172Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.753855757s" Mar 3 13:46:04.557661 containerd[1559]: time="2026-03-03T13:46:04.557582526Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 3 13:46:04.565490 containerd[1559]: time="2026-03-03T13:46:04.565361469Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 3 13:46:04.579702 containerd[1559]: time="2026-03-03T13:46:04.579586449Z" level=info msg="Container 75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92: CDI devices from 
CRI Config.CDIDevices: []" Mar 3 13:46:04.590428 containerd[1559]: time="2026-03-03T13:46:04.590327017Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\"" Mar 3 13:46:04.593685 containerd[1559]: time="2026-03-03T13:46:04.593566487Z" level=info msg="StartContainer for \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\"" Mar 3 13:46:04.595377 containerd[1559]: time="2026-03-03T13:46:04.595214743Z" level=info msg="connecting to shim 75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92" address="unix:///run/containerd/s/afe4ca79efed817fe29eb6d81fd4bcb22877f11df1f86732a430c1fa7300154f" protocol=ttrpc version=3 Mar 3 13:46:04.664388 systemd[1]: Started cri-containerd-75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92.scope - libcontainer container 75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92. Mar 3 13:46:04.750393 containerd[1559]: time="2026-03-03T13:46:04.750270353Z" level=info msg="StartContainer for \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\" returns successfully" Mar 3 13:46:04.780380 systemd[1]: cri-containerd-75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92.scope: Deactivated successfully. Mar 3 13:46:04.928976 containerd[1559]: time="2026-03-03T13:46:04.928716893Z" level=info msg="received container exit event container_id:\"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\" id:\"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\" pid:3255 exited_at:{seconds:1772545564 nanos:786415928}" Mar 3 13:46:04.963809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92-rootfs.mount: Deactivated successfully. 
Mar 3 13:46:05.723151 kubelet[2772]: E0303 13:46:05.722973 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:05.735554 containerd[1559]: time="2026-03-03T13:46:05.735229991Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 3 13:46:05.782505 containerd[1559]: time="2026-03-03T13:46:05.782349472Z" level=info msg="Container 33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:05.800978 containerd[1559]: time="2026-03-03T13:46:05.800814662Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\""
Mar 3 13:46:05.806459 containerd[1559]: time="2026-03-03T13:46:05.806428507Z" level=info msg="StartContainer for \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\""
Mar 3 13:46:05.808377 containerd[1559]: time="2026-03-03T13:46:05.808296273Z" level=info msg="connecting to shim 33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17" address="unix:///run/containerd/s/afe4ca79efed817fe29eb6d81fd4bcb22877f11df1f86732a430c1fa7300154f" protocol=ttrpc version=3
Mar 3 13:46:05.846388 systemd[1]: Started cri-containerd-33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17.scope - libcontainer container 33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17.
Mar 3 13:46:05.926872 containerd[1559]: time="2026-03-03T13:46:05.926246086Z" level=info msg="StartContainer for \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\" returns successfully"
Mar 3 13:46:05.957344 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:46:05.958148 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:46:05.958734 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:46:05.961768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:46:05.964324 containerd[1559]: time="2026-03-03T13:46:05.964172136Z" level=info msg="received container exit event container_id:\"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\" id:\"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\" pid:3300 exited_at:{seconds:1772545565 nanos:963546511}"
Mar 3 13:46:05.966035 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:46:05.967816 systemd[1]: cri-containerd-33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17.scope: Deactivated successfully.
Mar 3 13:46:06.018765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:46:06.728300 kubelet[2772]: E0303 13:46:06.728164 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:06.735110 containerd[1559]: time="2026-03-03T13:46:06.734979471Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 3 13:46:06.755387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17-rootfs.mount: Deactivated successfully.
Mar 3 13:46:06.765151 containerd[1559]: time="2026-03-03T13:46:06.763891461Z" level=info msg="Container f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:06.778530 containerd[1559]: time="2026-03-03T13:46:06.778392453Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\""
Mar 3 13:46:06.779546 containerd[1559]: time="2026-03-03T13:46:06.779344948Z" level=info msg="StartContainer for \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\""
Mar 3 13:46:06.781967 containerd[1559]: time="2026-03-03T13:46:06.781856601Z" level=info msg="connecting to shim f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173" address="unix:///run/containerd/s/afe4ca79efed817fe29eb6d81fd4bcb22877f11df1f86732a430c1fa7300154f" protocol=ttrpc version=3
Mar 3 13:46:06.832549 systemd[1]: Started cri-containerd-f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173.scope - libcontainer container f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173.
Mar 3 13:46:06.949233 containerd[1559]: time="2026-03-03T13:46:06.949162083Z" level=info msg="StartContainer for \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\" returns successfully"
Mar 3 13:46:06.950228 systemd[1]: cri-containerd-f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173.scope: Deactivated successfully.
Mar 3 13:46:06.954956 containerd[1559]: time="2026-03-03T13:46:06.954757706Z" level=info msg="received container exit event container_id:\"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\" id:\"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\" pid:3348 exited_at:{seconds:1772545566 nanos:954266559}"
Mar 3 13:46:07.018264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173-rootfs.mount: Deactivated successfully.
Mar 3 13:46:07.734273 kubelet[2772]: E0303 13:46:07.734236 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:07.742464 containerd[1559]: time="2026-03-03T13:46:07.742354389Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 13:46:07.757431 containerd[1559]: time="2026-03-03T13:46:07.757359918Z" level=info msg="Container 82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:07.771061 containerd[1559]: time="2026-03-03T13:46:07.770962202Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\""
Mar 3 13:46:07.771761 containerd[1559]: time="2026-03-03T13:46:07.771700808Z" level=info msg="StartContainer for \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\""
Mar 3 13:46:07.773526 containerd[1559]: time="2026-03-03T13:46:07.773342241Z" level=info msg="connecting to shim 82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658" address="unix:///run/containerd/s/afe4ca79efed817fe29eb6d81fd4bcb22877f11df1f86732a430c1fa7300154f" protocol=ttrpc version=3
Mar 3 13:46:07.809386 systemd[1]: Started cri-containerd-82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658.scope - libcontainer container 82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658.
Mar 3 13:46:07.862986 systemd[1]: cri-containerd-82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658.scope: Deactivated successfully.
Mar 3 13:46:07.869200 containerd[1559]: time="2026-03-03T13:46:07.869026604Z" level=info msg="received container exit event container_id:\"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\" id:\"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\" pid:3387 exited_at:{seconds:1772545567 nanos:866605400}"
Mar 3 13:46:07.871682 containerd[1559]: time="2026-03-03T13:46:07.871618782Z" level=info msg="StartContainer for \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\" returns successfully"
Mar 3 13:46:07.913667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658-rootfs.mount: Deactivated successfully.
Mar 3 13:46:08.742858 kubelet[2772]: E0303 13:46:08.742766 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:08.748884 containerd[1559]: time="2026-03-03T13:46:08.748842302Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 13:46:08.769682 containerd[1559]: time="2026-03-03T13:46:08.766491498Z" level=info msg="Container 2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:08.780594 containerd[1559]: time="2026-03-03T13:46:08.780472160Z" level=info msg="CreateContainer within sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\""
Mar 3 13:46:08.781493 containerd[1559]: time="2026-03-03T13:46:08.781370070Z" level=info msg="StartContainer for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\""
Mar 3 13:46:08.784032 containerd[1559]: time="2026-03-03T13:46:08.783919583Z" level=info msg="connecting to shim 2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe" address="unix:///run/containerd/s/afe4ca79efed817fe29eb6d81fd4bcb22877f11df1f86732a430c1fa7300154f" protocol=ttrpc version=3
Mar 3 13:46:08.817345 systemd[1]: Started cri-containerd-2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe.scope - libcontainer container 2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe.
Mar 3 13:46:08.891780 containerd[1559]: time="2026-03-03T13:46:08.891624277Z" level=info msg="StartContainer for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" returns successfully"
Mar 3 13:46:09.113222 kubelet[2772]: I0303 13:46:09.112601 2772 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 3 13:46:09.176323 systemd[1]: Created slice kubepods-burstable-pod03501375_7e2d_47dd_b85f_10e2e2305971.slice - libcontainer container kubepods-burstable-pod03501375_7e2d_47dd_b85f_10e2e2305971.slice.
Mar 3 13:46:09.189905 systemd[1]: Created slice kubepods-burstable-podb95e3323_a88e_4239_ae6f_4f59538641e6.slice - libcontainer container kubepods-burstable-podb95e3323_a88e_4239_ae6f_4f59538641e6.slice.
Mar 3 13:46:09.226659 kubelet[2772]: I0303 13:46:09.226478 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrqd\" (UniqueName: \"kubernetes.io/projected/b95e3323-a88e-4239-ae6f-4f59538641e6-kube-api-access-zsrqd\") pod \"coredns-674b8bbfcf-d2s4g\" (UID: \"b95e3323-a88e-4239-ae6f-4f59538641e6\") " pod="kube-system/coredns-674b8bbfcf-d2s4g"
Mar 3 13:46:09.226659 kubelet[2772]: I0303 13:46:09.226649 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03501375-7e2d-47dd-b85f-10e2e2305971-config-volume\") pod \"coredns-674b8bbfcf-h8j59\" (UID: \"03501375-7e2d-47dd-b85f-10e2e2305971\") " pod="kube-system/coredns-674b8bbfcf-h8j59"
Mar 3 13:46:09.226930 kubelet[2772]: I0303 13:46:09.226735 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9m7\" (UniqueName: \"kubernetes.io/projected/03501375-7e2d-47dd-b85f-10e2e2305971-kube-api-access-wg9m7\") pod \"coredns-674b8bbfcf-h8j59\" (UID: \"03501375-7e2d-47dd-b85f-10e2e2305971\") " pod="kube-system/coredns-674b8bbfcf-h8j59"
Mar 3 13:46:09.226930 kubelet[2772]: I0303 13:46:09.226757 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b95e3323-a88e-4239-ae6f-4f59538641e6-config-volume\") pod \"coredns-674b8bbfcf-d2s4g\" (UID: \"b95e3323-a88e-4239-ae6f-4f59538641e6\") " pod="kube-system/coredns-674b8bbfcf-d2s4g"
Mar 3 13:46:09.485694 kubelet[2772]: E0303 13:46:09.485586 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:09.486788 containerd[1559]: time="2026-03-03T13:46:09.486713422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h8j59,Uid:03501375-7e2d-47dd-b85f-10e2e2305971,Namespace:kube-system,Attempt:0,}"
Mar 3 13:46:09.495621 kubelet[2772]: E0303 13:46:09.495587 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:09.497466 containerd[1559]: time="2026-03-03T13:46:09.496985583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d2s4g,Uid:b95e3323-a88e-4239-ae6f-4f59538641e6,Namespace:kube-system,Attempt:0,}"
Mar 3 13:46:09.752601 kubelet[2772]: E0303 13:46:09.752400 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:09.783654 kubelet[2772]: I0303 13:46:09.783327 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2cbpw" podStartSLOduration=7.260156829 podStartE2EDuration="26.783305822s" podCreationTimestamp="2026-03-03 13:45:43 +0000 UTC" firstStartedPulling="2026-03-03 13:45:45.035911207 +0000 UTC m=+6.715426823" lastFinishedPulling="2026-03-03 13:46:04.55906021 +0000 UTC m=+26.238575816" observedRunningTime="2026-03-03 13:46:09.782344578 +0000 UTC m=+31.461860185" watchObservedRunningTime="2026-03-03 13:46:09.783305822 +0000 UTC m=+31.462821428"
Mar 3 13:46:10.755261 kubelet[2772]: E0303 13:46:10.755017 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:11.418374 systemd-networkd[1464]: cilium_host: Link UP
Mar 3 13:46:11.418736 systemd-networkd[1464]: cilium_net: Link UP
Mar 3 13:46:11.419036 systemd-networkd[1464]: cilium_host: Gained carrier
Mar 3 13:46:11.419402 systemd-networkd[1464]: cilium_net: Gained carrier
Mar 3 13:46:11.430704 systemd-networkd[1464]: cilium_host: Gained IPv6LL
Mar 3 13:46:11.577226 systemd-networkd[1464]: cilium_vxlan: Link UP
Mar 3 13:46:11.577240 systemd-networkd[1464]: cilium_vxlan: Gained carrier
Mar 3 13:46:11.757772 kubelet[2772]: E0303 13:46:11.757668 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:11.790470 systemd-networkd[1464]: cilium_net: Gained IPv6LL
Mar 3 13:46:11.931216 kernel: NET: Registered PF_ALG protocol family
Mar 3 13:46:12.760984 kubelet[2772]: E0303 13:46:12.760902 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:12.875786 systemd-networkd[1464]: lxc_health: Link UP
Mar 3 13:46:12.879294 systemd-networkd[1464]: lxc_health: Gained carrier
Mar 3 13:46:13.045879 systemd-networkd[1464]: lxca016dcca71df: Link UP
Mar 3 13:46:13.049946 kernel: eth0: renamed from tmp9a934
Mar 3 13:46:13.056311 systemd-networkd[1464]: lxca016dcca71df: Gained carrier
Mar 3 13:46:13.078299 kernel: eth0: renamed from tmp9e031
Mar 3 13:46:13.076595 systemd-networkd[1464]: lxc94e2f82b5f5c: Link UP
Mar 3 13:46:13.080811 systemd-networkd[1464]: lxc94e2f82b5f5c: Gained carrier
Mar 3 13:46:13.342741 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL
Mar 3 13:46:13.767135 kubelet[2772]: E0303 13:46:13.766222 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:14.430456 systemd-networkd[1464]: lxca016dcca71df: Gained IPv6LL
Mar 3 13:46:14.686471 systemd-networkd[1464]: lxc_health: Gained IPv6LL
Mar 3 13:46:14.769765 kubelet[2772]: E0303 13:46:14.769644 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:14.814444 systemd-networkd[1464]: lxc94e2f82b5f5c: Gained IPv6LL
Mar 3 13:46:17.086221 containerd[1559]: time="2026-03-03T13:46:17.085671510Z" level=info msg="connecting to shim 9a934cd2ac01ebdd72b773b1513327d459716c31730e8ee064e84e8952199d6d" address="unix:///run/containerd/s/a633bb725e7903b2743d6c2a4252c2bfb58649d0b2ef8dd12258117332c0160b" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:46:17.086221 containerd[1559]: time="2026-03-03T13:46:17.085783927Z" level=info msg="connecting to shim 9e0315327b4904c517c561dac4f0f9b64d8bfbfae4d3f0f3c3fd106320a6700c" address="unix:///run/containerd/s/74cd142722dc958f2d94e3e4fe4a766c09b5456d470590edc667ede97d1f336a" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:46:17.132298 systemd[1]: Started cri-containerd-9a934cd2ac01ebdd72b773b1513327d459716c31730e8ee064e84e8952199d6d.scope - libcontainer container 9a934cd2ac01ebdd72b773b1513327d459716c31730e8ee064e84e8952199d6d.
Mar 3 13:46:17.134195 systemd[1]: Started cri-containerd-9e0315327b4904c517c561dac4f0f9b64d8bfbfae4d3f0f3c3fd106320a6700c.scope - libcontainer container 9e0315327b4904c517c561dac4f0f9b64d8bfbfae4d3f0f3c3fd106320a6700c.
Mar 3 13:46:17.152926 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 3 13:46:17.155868 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 3 13:46:17.209209 containerd[1559]: time="2026-03-03T13:46:17.209166058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h8j59,Uid:03501375-7e2d-47dd-b85f-10e2e2305971,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e0315327b4904c517c561dac4f0f9b64d8bfbfae4d3f0f3c3fd106320a6700c\""
Mar 3 13:46:17.209909 containerd[1559]: time="2026-03-03T13:46:17.209848353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d2s4g,Uid:b95e3323-a88e-4239-ae6f-4f59538641e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a934cd2ac01ebdd72b773b1513327d459716c31730e8ee064e84e8952199d6d\""
Mar 3 13:46:17.212624 kubelet[2772]: E0303 13:46:17.212485 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:17.212933 kubelet[2772]: E0303 13:46:17.212697 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:17.218804 containerd[1559]: time="2026-03-03T13:46:17.218763042Z" level=info msg="CreateContainer within sandbox \"9e0315327b4904c517c561dac4f0f9b64d8bfbfae4d3f0f3c3fd106320a6700c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 13:46:17.221972 containerd[1559]: time="2026-03-03T13:46:17.221892670Z" level=info msg="CreateContainer within sandbox \"9a934cd2ac01ebdd72b773b1513327d459716c31730e8ee064e84e8952199d6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 13:46:17.238891 containerd[1559]: time="2026-03-03T13:46:17.238850879Z" level=info msg="Container ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:17.240269 containerd[1559]: time="2026-03-03T13:46:17.240213849Z" level=info msg="Container 95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:17.247703 containerd[1559]: time="2026-03-03T13:46:17.247610297Z" level=info msg="CreateContainer within sandbox \"9e0315327b4904c517c561dac4f0f9b64d8bfbfae4d3f0f3c3fd106320a6700c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed\""
Mar 3 13:46:17.248543 containerd[1559]: time="2026-03-03T13:46:17.248391740Z" level=info msg="StartContainer for \"ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed\""
Mar 3 13:46:17.251385 containerd[1559]: time="2026-03-03T13:46:17.251351454Z" level=info msg="CreateContainer within sandbox \"9a934cd2ac01ebdd72b773b1513327d459716c31730e8ee064e84e8952199d6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26\""
Mar 3 13:46:17.253126 containerd[1559]: time="2026-03-03T13:46:17.252022199Z" level=info msg="StartContainer for \"95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26\""
Mar 3 13:46:17.253203 containerd[1559]: time="2026-03-03T13:46:17.253071562Z" level=info msg="connecting to shim 95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26" address="unix:///run/containerd/s/a633bb725e7903b2743d6c2a4252c2bfb58649d0b2ef8dd12258117332c0160b" protocol=ttrpc version=3
Mar 3 13:46:17.260272 containerd[1559]: time="2026-03-03T13:46:17.260149554Z" level=info msg="connecting to shim ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed" address="unix:///run/containerd/s/74cd142722dc958f2d94e3e4fe4a766c09b5456d470590edc667ede97d1f336a" protocol=ttrpc version=3
Mar 3 13:46:17.280259 systemd[1]: Started cri-containerd-95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26.scope - libcontainer container 95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26.
Mar 3 13:46:17.285665 systemd[1]: Started cri-containerd-ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed.scope - libcontainer container ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed.
Mar 3 13:46:17.336818 containerd[1559]: time="2026-03-03T13:46:17.336443737Z" level=info msg="StartContainer for \"95ee1c37aa7ad4a9566d67ab3f781d7399b42a34d72e5bf6986708b62a854b26\" returns successfully"
Mar 3 13:46:17.348389 containerd[1559]: time="2026-03-03T13:46:17.348294989Z" level=info msg="StartContainer for \"ef33a377f0154f40ff5f99c9ed3235f8951cfb1e1e4333bec4984fe21a0a31ed\" returns successfully"
Mar 3 13:46:17.782185 kubelet[2772]: E0303 13:46:17.782139 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:17.784887 kubelet[2772]: E0303 13:46:17.784760 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:17.811238 kubelet[2772]: I0303 13:46:17.811136 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h8j59" podStartSLOduration=33.811068786 podStartE2EDuration="33.811068786s" podCreationTimestamp="2026-03-03 13:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:46:17.798435939 +0000 UTC m=+39.477951544" watchObservedRunningTime="2026-03-03 13:46:17.811068786 +0000 UTC m=+39.490584392"
Mar 3 13:46:18.788029 kubelet[2772]: E0303 13:46:18.787883 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:18.788702 kubelet[2772]: E0303 13:46:18.788059 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:19.789670 kubelet[2772]: E0303 13:46:19.789559 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:19.790161 kubelet[2772]: E0303 13:46:19.789695 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:32.410620 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:45630.service - OpenSSH per-connection server daemon (10.0.0.1:45630).
Mar 3 13:46:32.485956 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 45630 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:32.488256 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:32.494653 systemd-logind[1542]: New session 10 of user core.
Mar 3 13:46:32.501296 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 3 13:46:32.676309 sshd[4108]: Connection closed by 10.0.0.1 port 45630
Mar 3 13:46:32.676840 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:32.682264 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:45630.service: Deactivated successfully.
Mar 3 13:46:32.684555 systemd[1]: session-10.scope: Deactivated successfully.
Mar 3 13:46:32.685733 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit.
Mar 3 13:46:32.687495 systemd-logind[1542]: Removed session 10.
Mar 3 13:46:37.698513 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:45638.service - OpenSSH per-connection server daemon (10.0.0.1:45638).
Mar 3 13:46:37.767769 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 45638 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:37.769834 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:37.777054 systemd-logind[1542]: New session 11 of user core.
Mar 3 13:46:37.785358 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 3 13:46:37.877250 sshd[4128]: Connection closed by 10.0.0.1 port 45638
Mar 3 13:46:37.877600 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:37.882637 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:45638.service: Deactivated successfully.
Mar 3 13:46:37.885165 systemd[1]: session-11.scope: Deactivated successfully.
Mar 3 13:46:37.886345 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit.
Mar 3 13:46:37.888012 systemd-logind[1542]: Removed session 11.
Mar 3 13:46:42.890295 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:39476.service - OpenSSH per-connection server daemon (10.0.0.1:39476).
Mar 3 13:46:42.953682 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 39476 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:42.955642 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:42.961870 systemd-logind[1542]: New session 12 of user core.
Mar 3 13:46:42.977378 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 3 13:46:43.065139 sshd[4147]: Connection closed by 10.0.0.1 port 39476
Mar 3 13:46:43.065468 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:43.069820 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:39476.service: Deactivated successfully.
Mar 3 13:46:43.071878 systemd[1]: session-12.scope: Deactivated successfully.
Mar 3 13:46:43.073235 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit.
Mar 3 13:46:43.074836 systemd-logind[1542]: Removed session 12.
Mar 3 13:46:48.090235 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:39490.service - OpenSSH per-connection server daemon (10.0.0.1:39490).
Mar 3 13:46:48.157691 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 39490 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:48.159449 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:48.166068 systemd-logind[1542]: New session 13 of user core.
Mar 3 13:46:48.181449 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 3 13:46:48.287370 sshd[4166]: Connection closed by 10.0.0.1 port 39490
Mar 3 13:46:48.287851 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:48.293378 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:39490.service: Deactivated successfully.
Mar 3 13:46:48.296295 systemd[1]: session-13.scope: Deactivated successfully.
Mar 3 13:46:48.298356 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Mar 3 13:46:48.300837 systemd-logind[1542]: Removed session 13.
Mar 3 13:46:53.308343 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:55946.service - OpenSSH per-connection server daemon (10.0.0.1:55946).
Mar 3 13:46:53.379333 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 55946 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:53.381736 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:53.388873 systemd-logind[1542]: New session 14 of user core.
Mar 3 13:46:53.399413 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 13:46:53.490382 sshd[4183]: Connection closed by 10.0.0.1 port 55946
Mar 3 13:46:53.490943 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:53.491683 kubelet[2772]: E0303 13:46:53.491563 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:53.495962 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:55946.service: Deactivated successfully.
Mar 3 13:46:53.498360 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 13:46:53.499716 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Mar 3 13:46:53.501503 systemd-logind[1542]: Removed session 14.
Mar 3 13:46:54.492548 kubelet[2772]: E0303 13:46:54.491958 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:56.491639 kubelet[2772]: E0303 13:46:56.491524 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:58.512283 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:55952.service - OpenSSH per-connection server daemon (10.0.0.1:55952).
Mar 3 13:46:58.590832 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 55952 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:58.593532 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:58.610822 systemd-logind[1542]: New session 15 of user core.
Mar 3 13:46:58.623319 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 3 13:46:58.712580 sshd[4200]: Connection closed by 10.0.0.1 port 55952
Mar 3 13:46:58.713190 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:58.726788 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:55952.service: Deactivated successfully.
Mar 3 13:46:58.730427 systemd[1]: session-15.scope: Deactivated successfully.
Mar 3 13:46:58.732218 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Mar 3 13:46:58.737065 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:55966.service - OpenSSH per-connection server daemon (10.0.0.1:55966).
Mar 3 13:46:58.738053 systemd-logind[1542]: Removed session 15.
Mar 3 13:46:58.811251 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 55966 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:58.813276 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:58.819997 systemd-logind[1542]: New session 16 of user core.
Mar 3 13:46:58.830481 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 3 13:46:58.975761 sshd[4217]: Connection closed by 10.0.0.1 port 55966
Mar 3 13:46:58.977280 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:58.987039 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:55966.service: Deactivated successfully.
Mar 3 13:46:58.990608 systemd[1]: session-16.scope: Deactivated successfully.
Mar 3 13:46:58.993585 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Mar 3 13:46:58.999598 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:55982.service - OpenSSH per-connection server daemon (10.0.0.1:55982).
Mar 3 13:46:59.008264 systemd-logind[1542]: Removed session 16.
Mar 3 13:46:59.095231 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 55982 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:46:59.096200 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:59.109804 systemd-logind[1542]: New session 17 of user core.
Mar 3 13:46:59.124327 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 3 13:46:59.218404 sshd[4231]: Connection closed by 10.0.0.1 port 55982
Mar 3 13:46:59.218916 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:59.224227 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:55982.service: Deactivated successfully.
Mar 3 13:46:59.227293 systemd[1]: session-17.scope: Deactivated successfully.
Mar 3 13:46:59.228778 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Mar 3 13:46:59.230924 systemd-logind[1542]: Removed session 17.
Mar 3 13:47:04.235794 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:54182.service - OpenSSH per-connection server daemon (10.0.0.1:54182).
Mar 3 13:47:04.306977 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 54182 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:04.308832 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:04.315949 systemd-logind[1542]: New session 18 of user core.
Mar 3 13:47:04.323311 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 3 13:47:04.406498 sshd[4249]: Connection closed by 10.0.0.1 port 54182
Mar 3 13:47:04.406865 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:04.410600 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:54182.service: Deactivated successfully.
Mar 3 13:47:04.412815 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 13:47:04.415207 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Mar 3 13:47:04.416768 systemd-logind[1542]: Removed session 18.
Mar 3 13:47:09.424868 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:54190.service - OpenSSH per-connection server daemon (10.0.0.1:54190).
Mar 3 13:47:09.492610 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 54190 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:09.494529 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:09.503060 systemd-logind[1542]: New session 19 of user core.
Mar 3 13:47:09.515449 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 13:47:09.611996 sshd[4266]: Connection closed by 10.0.0.1 port 54190
Mar 3 13:47:09.612493 sshd-session[4263]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:09.617435 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:54190.service: Deactivated successfully.
Mar 3 13:47:09.620026 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 13:47:09.621735 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Mar 3 13:47:09.624160 systemd-logind[1542]: Removed session 19.
Mar 3 13:47:14.630996 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:41770.service - OpenSSH per-connection server daemon (10.0.0.1:41770).
Mar 3 13:47:14.702408 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 41770 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:14.704052 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:14.710283 systemd-logind[1542]: New session 20 of user core.
Mar 3 13:47:14.722388 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 3 13:47:14.806727 sshd[4282]: Connection closed by 10.0.0.1 port 41770
Mar 3 13:47:14.807362 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:14.823803 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:41770.service: Deactivated successfully.
Mar 3 13:47:14.825822 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 13:47:14.827215 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Mar 3 13:47:14.830032 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:41780.service - OpenSSH per-connection server daemon (10.0.0.1:41780).
Mar 3 13:47:14.831979 systemd-logind[1542]: Removed session 20.
Mar 3 13:47:14.899791 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 41780 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:14.901480 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:14.908299 systemd-logind[1542]: New session 21 of user core.
Mar 3 13:47:14.923313 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 13:47:15.179411 sshd[4299]: Connection closed by 10.0.0.1 port 41780
Mar 3 13:47:15.179493 sshd-session[4296]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:15.190784 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:41780.service: Deactivated successfully.
Mar 3 13:47:15.192918 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 13:47:15.194242 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Mar 3 13:47:15.196896 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:41792.service - OpenSSH per-connection server daemon (10.0.0.1:41792).
Mar 3 13:47:15.198867 systemd-logind[1542]: Removed session 21.
Mar 3 13:47:15.265740 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 41792 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:15.267541 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:15.276583 systemd-logind[1542]: New session 22 of user core.
Mar 3 13:47:15.289314 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 13:47:15.953293 sshd[4314]: Connection closed by 10.0.0.1 port 41792
Mar 3 13:47:15.955575 sshd-session[4311]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:15.966488 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:41792.service: Deactivated successfully.
Mar 3 13:47:15.973023 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 13:47:15.977486 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Mar 3 13:47:15.984634 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:41798.service - OpenSSH per-connection server daemon (10.0.0.1:41798).
Mar 3 13:47:15.988587 systemd-logind[1542]: Removed session 22.
Mar 3 13:47:16.052820 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 41798 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:16.054731 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:16.062514 systemd-logind[1542]: New session 23 of user core.
Mar 3 13:47:16.076346 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 13:47:16.337343 sshd[4338]: Connection closed by 10.0.0.1 port 41798
Mar 3 13:47:16.340055 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:16.354290 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:41798.service: Deactivated successfully.
Mar 3 13:47:16.358711 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 13:47:16.363859 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit.
Mar 3 13:47:16.369170 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:41810.service - OpenSSH per-connection server daemon (10.0.0.1:41810).
Mar 3 13:47:16.371777 systemd-logind[1542]: Removed session 23.
Mar 3 13:47:16.434629 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 41810 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:16.436597 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:16.444433 systemd-logind[1542]: New session 24 of user core.
Mar 3 13:47:16.458399 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 3 13:47:16.493377 kubelet[2772]: E0303 13:47:16.493231 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:16.560581 sshd[4354]: Connection closed by 10.0.0.1 port 41810
Mar 3 13:47:16.561272 sshd-session[4351]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:16.567586 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:41810.service: Deactivated successfully.
Mar 3 13:47:16.570792 systemd[1]: session-24.scope: Deactivated successfully.
Mar 3 13:47:16.572416 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit.
Mar 3 13:47:16.574882 systemd-logind[1542]: Removed session 24.
Mar 3 13:47:17.495218 kubelet[2772]: E0303 13:47:17.493363 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:21.575252 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:53622.service - OpenSSH per-connection server daemon (10.0.0.1:53622).
Mar 3 13:47:21.643037 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 53622 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:21.645553 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:21.652681 systemd-logind[1542]: New session 25 of user core.
Mar 3 13:47:21.663300 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 3 13:47:21.752925 sshd[4370]: Connection closed by 10.0.0.1 port 53622
Mar 3 13:47:21.753553 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:21.759535 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:53622.service: Deactivated successfully.
Mar 3 13:47:21.762796 systemd[1]: session-25.scope: Deactivated successfully.
Mar 3 13:47:21.766259 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit.
Mar 3 13:47:21.768593 systemd-logind[1542]: Removed session 25.
Mar 3 13:47:22.492661 kubelet[2772]: E0303 13:47:22.492552 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:26.772297 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:53636.service - OpenSSH per-connection server daemon (10.0.0.1:53636).
Mar 3 13:47:26.841842 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:26.843581 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:26.849968 systemd-logind[1542]: New session 26 of user core.
Mar 3 13:47:26.861268 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 3 13:47:26.944698 sshd[4389]: Connection closed by 10.0.0.1 port 53636
Mar 3 13:47:26.945146 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:26.949701 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:53636.service: Deactivated successfully.
Mar 3 13:47:26.951798 systemd[1]: session-26.scope: Deactivated successfully.
Mar 3 13:47:26.952905 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit.
Mar 3 13:47:26.954887 systemd-logind[1542]: Removed session 26.
Mar 3 13:47:31.964163 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:56658.service - OpenSSH per-connection server daemon (10.0.0.1:56658).
Mar 3 13:47:32.035008 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 56658 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:32.037420 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:32.046261 systemd-logind[1542]: New session 27 of user core.
Mar 3 13:47:32.056516 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 3 13:47:32.170629 sshd[4405]: Connection closed by 10.0.0.1 port 56658
Mar 3 13:47:32.171335 sshd-session[4402]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:32.184584 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:56658.service: Deactivated successfully.
Mar 3 13:47:32.187227 systemd[1]: session-27.scope: Deactivated successfully.
Mar 3 13:47:32.188692 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit.
Mar 3 13:47:32.191673 systemd[1]: Started sshd@27-10.0.0.81:22-10.0.0.1:56662.service - OpenSSH per-connection server daemon (10.0.0.1:56662).
Mar 3 13:47:32.193967 systemd-logind[1542]: Removed session 27.
Mar 3 13:47:32.274023 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 56662 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:47:32.275948 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:32.283442 systemd-logind[1542]: New session 28 of user core.
Mar 3 13:47:32.298656 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 3 13:47:33.668187 kubelet[2772]: I0303 13:47:33.667757 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d2s4g" podStartSLOduration=109.667734514 podStartE2EDuration="1m49.667734514s" podCreationTimestamp="2026-03-03 13:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:46:17.824854129 +0000 UTC m=+39.504369735" watchObservedRunningTime="2026-03-03 13:47:33.667734514 +0000 UTC m=+115.347250120"
Mar 3 13:47:33.675183 containerd[1559]: time="2026-03-03T13:47:33.673230271Z" level=info msg="StopContainer for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" with timeout 30 (s)"
Mar 3 13:47:33.689969 containerd[1559]: time="2026-03-03T13:47:33.689892866Z" level=info msg="Stop container \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" with signal terminated"
Mar 3 13:47:33.713517 systemd[1]: cri-containerd-acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e.scope: Deactivated successfully.
Mar 3 13:47:33.713931 systemd[1]: cri-containerd-acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e.scope: Consumed 1.621s CPU time, 26.3M memory peak, 580K read from disk, 4K written to disk.
Mar 3 13:47:33.719456 containerd[1559]: time="2026-03-03T13:47:33.719354228Z" level=info msg="received container exit event container_id:\"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" id:\"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" pid:3189 exited_at:{seconds:1772545653 nanos:716856279}"
Mar 3 13:47:33.727537 containerd[1559]: time="2026-03-03T13:47:33.727466384Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:47:33.745722 containerd[1559]: time="2026-03-03T13:47:33.745209263Z" level=info msg="StopContainer for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" with timeout 2 (s)"
Mar 3 13:47:33.745869 containerd[1559]: time="2026-03-03T13:47:33.745849680Z" level=info msg="Stop container \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" with signal terminated"
Mar 3 13:47:33.762264 systemd-networkd[1464]: lxc_health: Link DOWN
Mar 3 13:47:33.765494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e-rootfs.mount: Deactivated successfully.
Mar 3 13:47:33.765941 systemd-networkd[1464]: lxc_health: Lost carrier
Mar 3 13:47:33.789817 containerd[1559]: time="2026-03-03T13:47:33.789730453Z" level=info msg="StopContainer for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" returns successfully"
Mar 3 13:47:33.793298 systemd[1]: cri-containerd-2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe.scope: Deactivated successfully.
Mar 3 13:47:33.793730 systemd[1]: cri-containerd-2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe.scope: Consumed 8.721s CPU time, 125.7M memory peak, 124K read from disk, 13.3M written to disk.
Mar 3 13:47:33.796650 containerd[1559]: time="2026-03-03T13:47:33.796592183Z" level=info msg="received container exit event container_id:\"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" id:\"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" pid:3425 exited_at:{seconds:1772545653 nanos:796217592}"
Mar 3 13:47:33.796733 containerd[1559]: time="2026-03-03T13:47:33.796640061Z" level=info msg="StopPodSandbox for \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\""
Mar 3 13:47:33.796775 containerd[1559]: time="2026-03-03T13:47:33.796754796Z" level=info msg="Container to stop \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:47:33.811720 systemd[1]: cri-containerd-4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c.scope: Deactivated successfully.
Mar 3 13:47:33.819650 containerd[1559]: time="2026-03-03T13:47:33.819582386Z" level=info msg="received sandbox exit event container_id:\"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" id:\"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" exit_status:137 exited_at:{seconds:1772545653 nanos:818924004}" monitor_name=podsandbox
Mar 3 13:47:33.838847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe-rootfs.mount: Deactivated successfully.
Mar 3 13:47:33.851943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c-rootfs.mount: Deactivated successfully.
Mar 3 13:47:33.854601 containerd[1559]: time="2026-03-03T13:47:33.854550897Z" level=info msg="StopContainer for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" returns successfully"
Mar 3 13:47:33.855519 containerd[1559]: time="2026-03-03T13:47:33.855426002Z" level=info msg="StopPodSandbox for \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\""
Mar 3 13:47:33.855519 containerd[1559]: time="2026-03-03T13:47:33.855485343Z" level=info msg="Container to stop \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:47:33.855519 containerd[1559]: time="2026-03-03T13:47:33.855496254Z" level=info msg="Container to stop \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:47:33.855519 containerd[1559]: time="2026-03-03T13:47:33.855504539Z" level=info msg="Container to stop \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:47:33.855519 containerd[1559]: time="2026-03-03T13:47:33.855512684Z" level=info msg="Container to stop \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:47:33.855519 containerd[1559]: time="2026-03-03T13:47:33.855520438Z" level=info msg="Container to stop \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:47:33.857536 containerd[1559]: time="2026-03-03T13:47:33.857305886Z" level=info msg="shim disconnected" id=4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c namespace=k8s.io
Mar 3 13:47:33.857536 containerd[1559]: time="2026-03-03T13:47:33.857403748Z" level=warning msg="cleaning up after shim disconnected" id=4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c namespace=k8s.io
Mar 3 13:47:33.857536 containerd[1559]: time="2026-03-03T13:47:33.857420369Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 13:47:33.866708 systemd[1]: cri-containerd-5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec.scope: Deactivated successfully.
Mar 3 13:47:33.869401 containerd[1559]: time="2026-03-03T13:47:33.869370436Z" level=info msg="received sandbox exit event container_id:\"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" id:\"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" exit_status:137 exited_at:{seconds:1772545653 nanos:869145206}" monitor_name=podsandbox
Mar 3 13:47:33.885170 containerd[1559]: time="2026-03-03T13:47:33.884346060Z" level=info msg="received sandbox container exit event sandbox_id:\"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" exit_status:137 exited_at:{seconds:1772545653 nanos:818924004}" monitor_name=criService
Mar 3 13:47:33.886568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c-shm.mount: Deactivated successfully.
Mar 3 13:47:33.891494 containerd[1559]: time="2026-03-03T13:47:33.891398983Z" level=info msg="TearDown network for sandbox \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" successfully"
Mar 3 13:47:33.891494 containerd[1559]: time="2026-03-03T13:47:33.891453235Z" level=info msg="StopPodSandbox for \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" returns successfully"
Mar 3 13:47:33.907656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec-rootfs.mount: Deactivated successfully.
Mar 3 13:47:33.915975 containerd[1559]: time="2026-03-03T13:47:33.915008818Z" level=info msg="shim disconnected" id=5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec namespace=k8s.io
Mar 3 13:47:33.915975 containerd[1559]: time="2026-03-03T13:47:33.915385913Z" level=warning msg="cleaning up after shim disconnected" id=5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec namespace=k8s.io
Mar 3 13:47:33.915975 containerd[1559]: time="2026-03-03T13:47:33.915398637Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 13:47:33.941743 containerd[1559]: time="2026-03-03T13:47:33.941685965Z" level=info msg="received sandbox container exit event sandbox_id:\"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" exit_status:137 exited_at:{seconds:1772545653 nanos:869145206}" monitor_name=criService
Mar 3 13:47:33.942247 containerd[1559]: time="2026-03-03T13:47:33.942190518Z" level=info msg="TearDown network for sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" successfully"
Mar 3 13:47:33.942247 containerd[1559]: time="2026-03-03T13:47:33.942237405Z" level=info msg="StopPodSandbox for \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" returns successfully"
Mar 3 13:47:33.949561 kubelet[2772]: I0303 13:47:33.949534 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-cilium-config-path\") pod \"b606ab69-5bb0-4d0b-a8ea-ea4124d340c6\" (UID: \"b606ab69-5bb0-4d0b-a8ea-ea4124d340c6\") "
Mar 3 13:47:33.950249 kubelet[2772]: I0303 13:47:33.950006 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gpx7\" (UniqueName: \"kubernetes.io/projected/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-kube-api-access-8gpx7\") pod \"b606ab69-5bb0-4d0b-a8ea-ea4124d340c6\" (UID: \"b606ab69-5bb0-4d0b-a8ea-ea4124d340c6\") "
Mar 3 13:47:33.954640 kubelet[2772]: I0303 13:47:33.954547 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b606ab69-5bb0-4d0b-a8ea-ea4124d340c6" (UID: "b606ab69-5bb0-4d0b-a8ea-ea4124d340c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 13:47:33.957896 kubelet[2772]: I0303 13:47:33.957832 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-kube-api-access-8gpx7" (OuterVolumeSpecName: "kube-api-access-8gpx7") pod "b606ab69-5bb0-4d0b-a8ea-ea4124d340c6" (UID: "b606ab69-5bb0-4d0b-a8ea-ea4124d340c6"). InnerVolumeSpecName "kube-api-access-8gpx7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 13:47:33.981858 kubelet[2772]: I0303 13:47:33.981708 2772 scope.go:117] "RemoveContainer" containerID="acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e"
Mar 3 13:47:33.985802 containerd[1559]: time="2026-03-03T13:47:33.985602976Z" level=info msg="RemoveContainer for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\""
Mar 3 13:47:34.000401 systemd[1]: Removed slice kubepods-besteffort-podb606ab69_5bb0_4d0b_a8ea_ea4124d340c6.slice - libcontainer container kubepods-besteffort-podb606ab69_5bb0_4d0b_a8ea_ea4124d340c6.slice.
Mar 3 13:47:34.000545 systemd[1]: kubepods-besteffort-podb606ab69_5bb0_4d0b_a8ea_ea4124d340c6.slice: Consumed 1.679s CPU time, 26.5M memory peak, 580K read from disk, 4K written to disk.
Mar 3 13:47:34.014563 containerd[1559]: time="2026-03-03T13:47:34.014379122Z" level=info msg="RemoveContainer for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" returns successfully"
Mar 3 13:47:34.015350 kubelet[2772]: I0303 13:47:34.014842 2772 scope.go:117] "RemoveContainer" containerID="acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e"
Mar 3 13:47:34.016576 containerd[1559]: time="2026-03-03T13:47:34.016413984Z" level=error msg="ContainerStatus for \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\": not found"
Mar 3 13:47:34.016637 kubelet[2772]: E0303 13:47:34.016597 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\": not found" containerID="acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e"
Mar 3 13:47:34.016672 kubelet[2772]: I0303 13:47:34.016627 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e"} err="failed to get container status \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\": rpc error: code = NotFound desc = an error occurred when try to find container \"acc09f3d0d9b7b2a5202b7e02287a810ece6d272d8e54060c0046a4af62f551e\": not found"
Mar 3 13:47:34.016672 kubelet[2772]: I0303 13:47:34.016665 2772 scope.go:117] "RemoveContainer" containerID="2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe"
Mar 3 13:47:34.016879 kubelet[2772]: E0303 13:47:34.016831 2772 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:47:34.019227 containerd[1559]: time="2026-03-03T13:47:34.019154514Z" level=info msg="RemoveContainer for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\""
Mar 3 13:47:34.025604 containerd[1559]: time="2026-03-03T13:47:34.025575831Z" level=info msg="RemoveContainer for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" returns successfully"
Mar 3 13:47:34.025873 kubelet[2772]: I0303 13:47:34.025785 2772 scope.go:117] "RemoveContainer" containerID="82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658"
Mar 3 13:47:34.027959 containerd[1559]: time="2026-03-03T13:47:34.027876654Z" level=info msg="RemoveContainer for \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\""
Mar 3 13:47:34.033592 containerd[1559]: time="2026-03-03T13:47:34.033506137Z" level=info msg="RemoveContainer for \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\" returns successfully"
Mar 3 13:47:34.033728 kubelet[2772]: I0303 13:47:34.033686 2772 scope.go:117] "RemoveContainer" containerID="f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173"
Mar 3 13:47:34.036963 containerd[1559]: time="2026-03-03T13:47:34.036906377Z" level=info msg="RemoveContainer for \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\""
Mar 3 13:47:34.042917 containerd[1559]: time="2026-03-03T13:47:34.042877263Z" level=info msg="RemoveContainer for \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\" returns successfully"
Mar 3 13:47:34.043246 kubelet[2772]: I0303 13:47:34.043130 2772 scope.go:117] "RemoveContainer" containerID="33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17"
Mar 3 13:47:34.045150 containerd[1559]: time="2026-03-03T13:47:34.044972061Z" level=info msg="RemoveContainer for \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\""
Mar 3 13:47:34.051110 containerd[1559]: time="2026-03-03T13:47:34.051008559Z" level=info msg="RemoveContainer for \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\" returns successfully"
Mar 3 13:47:34.051315 kubelet[2772]: I0303 13:47:34.051295 2772 scope.go:117] "RemoveContainer" containerID="75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92"
Mar 3 13:47:34.051817 kubelet[2772]: I0303 13:47:34.051739 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-run\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051817 kubelet[2772]: I0303 13:47:34.051794 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk8w\" (UniqueName: \"kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-kube-api-access-rsk8w\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051817 kubelet[2772]: I0303 13:47:34.051811 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hubble-tls\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051981 kubelet[2772]: I0303 13:47:34.051831 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-config-path\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051981 kubelet[2772]: I0303 13:47:34.051845 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hostproc\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051981 kubelet[2772]: I0303 13:47:34.051857 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-lib-modules\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051981 kubelet[2772]: I0303 13:47:34.051871 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-net\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051981 kubelet[2772]: I0303 13:47:34.051884 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-xtables-lock\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.051981 kubelet[2772]: I0303 13:47:34.051898 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cni-path\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.052341 kubelet[2772]: I0303 13:47:34.051912 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-etc-cni-netd\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.052341 kubelet[2772]: I0303 13:47:34.051928 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-clustermesh-secrets\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.052341 kubelet[2772]: I0303 13:47:34.051940 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-bpf-maps\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.052341 kubelet[2772]: I0303 13:47:34.051955 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-cgroup\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.052341 kubelet[2772]: I0303 13:47:34.051973 2772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-kernel\") pod \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\" (UID: \"21f3a3df-63ae-4df2-aac7-995ab3d2e8b1\") "
Mar 3 13:47:34.052341 kubelet[2772]: I0303 13:47:34.052006 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gpx7\" (UniqueName: \"kubernetes.io/projected/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-kube-api-access-8gpx7\") on node \"localhost\" DevicePath \"\""
Mar 3 13:47:34.052558 kubelet[2772]: I0303 13:47:34.052016 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 3 13:47:34.052558 kubelet[2772]: I0303 13:47:34.052137 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:47:34.052558 kubelet[2772]: I0303 13:47:34.052188 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cni-path" (OuterVolumeSpecName: "cni-path") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:47:34.052558 kubelet[2772]: I0303 13:47:34.052132 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hostproc" (OuterVolumeSpecName: "hostproc") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:47:34.052558 kubelet[2772]: I0303 13:47:34.052194 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:47:34.052748 kubelet[2772]: I0303 13:47:34.052208 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 13:47:34.052748 kubelet[2772]: I0303 13:47:34.052227 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 13:47:34.052748 kubelet[2772]: I0303 13:47:34.052239 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 13:47:34.052748 kubelet[2772]: I0303 13:47:34.052245 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 13:47:34.052748 kubelet[2772]: I0303 13:47:34.052256 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 13:47:34.052937 kubelet[2772]: I0303 13:47:34.052259 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 3 13:47:34.056518 containerd[1559]: time="2026-03-03T13:47:34.055804715Z" level=info msg="RemoveContainer for \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\"" Mar 3 13:47:34.056736 kubelet[2772]: I0303 13:47:34.056010 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:47:34.057036 kubelet[2772]: I0303 13:47:34.056927 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-kube-api-access-rsk8w" (OuterVolumeSpecName: "kube-api-access-rsk8w") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "kube-api-access-rsk8w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 13:47:34.058486 kubelet[2772]: I0303 13:47:34.058463 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 13:47:34.060659 containerd[1559]: time="2026-03-03T13:47:34.060569137Z" level=info msg="RemoveContainer for \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\" returns successfully" Mar 3 13:47:34.060744 kubelet[2772]: I0303 13:47:34.060719 2772 scope.go:117] "RemoveContainer" containerID="2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe" Mar 3 13:47:34.060928 containerd[1559]: time="2026-03-03T13:47:34.060878155Z" level=error msg="ContainerStatus for \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\": not found" Mar 3 13:47:34.061219 kubelet[2772]: E0303 13:47:34.061035 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\": not found" containerID="2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe" Mar 3 13:47:34.061219 kubelet[2772]: I0303 13:47:34.061146 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe"} err="failed to get container status \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"2444e17bcc7cf629f99285fe4848b8be5c9186f8aab5ecc884cabc30a112b1fe\": not found" Mar 3 13:47:34.061219 kubelet[2772]: I0303 13:47:34.061164 2772 scope.go:117] "RemoveContainer" containerID="82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658" Mar 3 13:47:34.061689 kubelet[2772]: I0303 13:47:34.061613 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" (UID: "21f3a3df-63ae-4df2-aac7-995ab3d2e8b1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 3 13:47:34.061749 containerd[1559]: time="2026-03-03T13:47:34.061602070Z" level=error msg="ContainerStatus for \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\": not found" Mar 3 13:47:34.061957 kubelet[2772]: E0303 13:47:34.061904 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\": not found" containerID="82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658" Mar 3 13:47:34.061993 kubelet[2772]: I0303 13:47:34.061971 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658"} err="failed to get container status \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\": rpc error: code = NotFound desc = an error occurred when try to find container \"82cbde4a9710dfb3e6488b69e9360ede3c12d1c011e1fdc511f734ad859bd658\": not found" Mar 3 13:47:34.062024 kubelet[2772]: I0303 13:47:34.061997 2772 scope.go:117] "RemoveContainer" containerID="f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173" Mar 3 13:47:34.062421 containerd[1559]: time="2026-03-03T13:47:34.062379203Z" level=error msg="ContainerStatus for \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\": not found" Mar 3 13:47:34.062690 kubelet[2772]: E0303 13:47:34.062648 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\": not found" containerID="f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173" Mar 3 13:47:34.062754 kubelet[2772]: I0303 13:47:34.062700 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173"} err="failed to get container status \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9a8e93c48b80a807900dce17f23cffccd21be0364f94ed5d5204373a0724173\": not found" Mar 3 13:47:34.062790 kubelet[2772]: I0303 13:47:34.062756 2772 scope.go:117] "RemoveContainer" containerID="33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17" Mar 3 13:47:34.063158 containerd[1559]: time="2026-03-03T13:47:34.063013333Z" level=error msg="ContainerStatus for \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\": not found" Mar 3 13:47:34.063618 kubelet[2772]: E0303 13:47:34.063501 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\": not found" containerID="33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17" Mar 3 13:47:34.063658 kubelet[2772]: I0303 13:47:34.063623 2772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17"} err="failed to get container status \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\": rpc error: code = NotFound desc = an error occurred when try to find container \"33628c6b3746063eb81a1bb25724cdcc4ad222f6a4b2bdd379053cf47547cf17\": not found" Mar 3 13:47:34.063658 kubelet[2772]: I0303 13:47:34.063638 2772 scope.go:117] "RemoveContainer" containerID="75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92" Mar 3 13:47:34.063902 containerd[1559]: time="2026-03-03T13:47:34.063838630Z" level=error msg="ContainerStatus for \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\": not found" Mar 3 13:47:34.064250 kubelet[2772]: E0303 13:47:34.064210 2772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\": not found" containerID="75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92" Mar 3 13:47:34.064301 kubelet[2772]: I0303 13:47:34.064250 2772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92"} err="failed to get container status \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\": rpc error: code = NotFound desc = an error occurred when try to find container \"75a56ddae54b7965ce75de3b0567900dcee9845a0a77911aa13bd3bd79908f92\": not found" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153449 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153517 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsk8w\" (UniqueName: \"kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-kube-api-access-rsk8w\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153530 2772 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153540 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153551 2772 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153561 2772 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153570 2772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153578 kubelet[2772]: I0303 13:47:34.153580 2772 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153901 kubelet[2772]: I0303 13:47:34.153589 2772 
reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153901 kubelet[2772]: I0303 13:47:34.153600 2772 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153901 kubelet[2772]: I0303 13:47:34.153609 2772 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153901 kubelet[2772]: I0303 13:47:34.153618 2772 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153901 kubelet[2772]: I0303 13:47:34.153627 2772 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.153901 kubelet[2772]: I0303 13:47:34.153637 2772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 3 13:47:34.307336 systemd[1]: Removed slice kubepods-burstable-pod21f3a3df_63ae_4df2_aac7_995ab3d2e8b1.slice - libcontainer container kubepods-burstable-pod21f3a3df_63ae_4df2_aac7_995ab3d2e8b1.slice. Mar 3 13:47:34.307540 systemd[1]: kubepods-burstable-pod21f3a3df_63ae_4df2_aac7_995ab3d2e8b1.slice: Consumed 8.945s CPU time, 126.1M memory peak, 128K read from disk, 13.3M written to disk. 
Mar 3 13:47:34.494677 kubelet[2772]: I0303 13:47:34.494577 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f3a3df-63ae-4df2-aac7-995ab3d2e8b1" path="/var/lib/kubelet/pods/21f3a3df-63ae-4df2-aac7-995ab3d2e8b1/volumes" Mar 3 13:47:34.495815 kubelet[2772]: I0303 13:47:34.495735 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b606ab69-5bb0-4d0b-a8ea-ea4124d340c6" path="/var/lib/kubelet/pods/b606ab69-5bb0-4d0b-a8ea-ea4124d340c6/volumes" Mar 3 13:47:34.761041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec-shm.mount: Deactivated successfully. Mar 3 13:47:34.761286 systemd[1]: var-lib-kubelet-pods-21f3a3df\x2d63ae\x2d4df2\x2daac7\x2d995ab3d2e8b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drsk8w.mount: Deactivated successfully. Mar 3 13:47:34.761373 systemd[1]: var-lib-kubelet-pods-b606ab69\x2d5bb0\x2d4d0b\x2da8ea\x2dea4124d340c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gpx7.mount: Deactivated successfully. Mar 3 13:47:34.761495 systemd[1]: var-lib-kubelet-pods-21f3a3df\x2d63ae\x2d4df2\x2daac7\x2d995ab3d2e8b1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 3 13:47:34.761574 systemd[1]: var-lib-kubelet-pods-21f3a3df\x2d63ae\x2d4df2\x2daac7\x2d995ab3d2e8b1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 3 13:47:35.491583 kubelet[2772]: E0303 13:47:35.491390 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-d2s4g" podUID="b95e3323-a88e-4239-ae6f-4f59538641e6" Mar 3 13:47:35.611218 sshd[4421]: Connection closed by 10.0.0.1 port 56662 Mar 3 13:47:35.612040 sshd-session[4418]: pam_unix(sshd:session): session closed for user core Mar 3 13:47:35.623641 systemd[1]: sshd@27-10.0.0.81:22-10.0.0.1:56662.service: Deactivated successfully. Mar 3 13:47:35.626530 systemd[1]: session-28.scope: Deactivated successfully. Mar 3 13:47:35.629158 systemd-logind[1542]: Session 28 logged out. Waiting for processes to exit. Mar 3 13:47:35.634362 systemd[1]: Started sshd@28-10.0.0.81:22-10.0.0.1:56668.service - OpenSSH per-connection server daemon (10.0.0.1:56668). Mar 3 13:47:35.635524 systemd-logind[1542]: Removed session 28. Mar 3 13:47:35.711186 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 56668 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:47:35.712702 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:47:35.718802 systemd-logind[1542]: New session 29 of user core. Mar 3 13:47:35.732361 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 3 13:47:36.317471 sshd[4572]: Connection closed by 10.0.0.1 port 56668 Mar 3 13:47:36.318010 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Mar 3 13:47:36.331485 systemd[1]: sshd@28-10.0.0.81:22-10.0.0.1:56668.service: Deactivated successfully. Mar 3 13:47:36.336524 systemd[1]: session-29.scope: Deactivated successfully. Mar 3 13:47:36.341023 systemd-logind[1542]: Session 29 logged out. Waiting for processes to exit. 
Mar 3 13:47:36.346532 systemd[1]: Started sshd@29-10.0.0.81:22-10.0.0.1:56672.service - OpenSSH per-connection server daemon (10.0.0.1:56672). Mar 3 13:47:36.352800 systemd-logind[1542]: Removed session 29. Mar 3 13:47:36.379608 systemd[1]: Created slice kubepods-burstable-podcd7bd99d_8f05_4705_b997_9227910e5685.slice - libcontainer container kubepods-burstable-podcd7bd99d_8f05_4705_b997_9227910e5685.slice. Mar 3 13:47:36.421920 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:47:36.424292 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:47:36.431019 systemd-logind[1542]: New session 30 of user core. Mar 3 13:47:36.450338 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 3 13:47:36.468519 sshd[4587]: Connection closed by 10.0.0.1 port 56672 Mar 3 13:47:36.469042 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Mar 3 13:47:36.474172 kubelet[2772]: I0303 13:47:36.474009 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-cilium-cgroup\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474293 kubelet[2772]: I0303 13:47:36.474236 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-etc-cni-netd\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474366 kubelet[2772]: I0303 13:47:36.474317 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-xtables-lock\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474430 kubelet[2772]: I0303 13:47:36.474388 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-host-proc-sys-net\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474464 kubelet[2772]: I0303 13:47:36.474450 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv9gd\" (UniqueName: \"kubernetes.io/projected/cd7bd99d-8f05-4705-b997-9227910e5685-kube-api-access-qv9gd\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474612 kubelet[2772]: I0303 13:47:36.474486 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-bpf-maps\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474782 kubelet[2772]: I0303 13:47:36.474642 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cd7bd99d-8f05-4705-b997-9227910e5685-cilium-ipsec-secrets\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474818 kubelet[2772]: I0303 13:47:36.474790 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-cilium-run\") pod \"cilium-kvbzj\" (UID: 
\"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474841 kubelet[2772]: I0303 13:47:36.474818 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-hostproc\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474869 kubelet[2772]: I0303 13:47:36.474847 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-host-proc-sys-kernel\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474895 kubelet[2772]: I0303 13:47:36.474880 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd7bd99d-8f05-4705-b997-9227910e5685-hubble-tls\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474924 kubelet[2772]: I0303 13:47:36.474906 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd7bd99d-8f05-4705-b997-9227910e5685-clustermesh-secrets\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.474947 kubelet[2772]: I0303 13:47:36.474936 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd7bd99d-8f05-4705-b997-9227910e5685-cilium-config-path\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.475015 kubelet[2772]: I0303 
13:47:36.474968 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-cni-path\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.475169 kubelet[2772]: I0303 13:47:36.475031 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd7bd99d-8f05-4705-b997-9227910e5685-lib-modules\") pod \"cilium-kvbzj\" (UID: \"cd7bd99d-8f05-4705-b997-9227910e5685\") " pod="kube-system/cilium-kvbzj" Mar 3 13:47:36.481559 systemd[1]: sshd@29-10.0.0.81:22-10.0.0.1:56672.service: Deactivated successfully. Mar 3 13:47:36.484212 systemd[1]: session-30.scope: Deactivated successfully. Mar 3 13:47:36.485471 systemd-logind[1542]: Session 30 logged out. Waiting for processes to exit. Mar 3 13:47:36.488725 systemd[1]: Started sshd@30-10.0.0.81:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680). Mar 3 13:47:36.490311 systemd-logind[1542]: Removed session 30. Mar 3 13:47:36.560982 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:47:36.563478 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:47:36.570615 systemd-logind[1542]: New session 31 of user core. Mar 3 13:47:36.578388 systemd[1]: Started session-31.scope - Session 31 of User core. 
Mar 3 13:47:36.686682 kubelet[2772]: E0303 13:47:36.686547 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:47:36.694147 containerd[1559]: time="2026-03-03T13:47:36.693953016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvbzj,Uid:cd7bd99d-8f05-4705-b997-9227910e5685,Namespace:kube-system,Attempt:0,}" Mar 3 13:47:36.715590 containerd[1559]: time="2026-03-03T13:47:36.715489773Z" level=info msg="connecting to shim 34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637" address="unix:///run/containerd/s/fb0f41b5653a753c89148a23201d4c67c376610d3096652008dba84076640639" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:47:36.751350 systemd[1]: Started cri-containerd-34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637.scope - libcontainer container 34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637. 
Mar 3 13:47:36.791232 containerd[1559]: time="2026-03-03T13:47:36.791186216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvbzj,Uid:cd7bd99d-8f05-4705-b997-9227910e5685,Namespace:kube-system,Attempt:0,} returns sandbox id \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\""
Mar 3 13:47:36.792659 kubelet[2772]: E0303 13:47:36.792538 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:36.803575 containerd[1559]: time="2026-03-03T13:47:36.803494918Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 3 13:47:36.813865 containerd[1559]: time="2026-03-03T13:47:36.813780249Z" level=info msg="Container a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:47:36.822290 containerd[1559]: time="2026-03-03T13:47:36.821734600Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790\""
Mar 3 13:47:36.823364 containerd[1559]: time="2026-03-03T13:47:36.823327656Z" level=info msg="StartContainer for \"a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790\""
Mar 3 13:47:36.824781 containerd[1559]: time="2026-03-03T13:47:36.824497901Z" level=info msg="connecting to shim a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790" address="unix:///run/containerd/s/fb0f41b5653a753c89148a23201d4c67c376610d3096652008dba84076640639" protocol=ttrpc version=3
Mar 3 13:47:36.866391 systemd[1]: Started cri-containerd-a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790.scope - libcontainer container a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790.
Mar 3 13:47:36.919288 containerd[1559]: time="2026-03-03T13:47:36.919176049Z" level=info msg="StartContainer for \"a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790\" returns successfully"
Mar 3 13:47:36.932977 systemd[1]: cri-containerd-a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790.scope: Deactivated successfully.
Mar 3 13:47:36.935958 containerd[1559]: time="2026-03-03T13:47:36.935888290Z" level=info msg="received container exit event container_id:\"a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790\" id:\"a4c825873cb14a63810501b69fcf5e376373a8c3528f00ef78d099f1512ee790\" pid:4668 exited_at:{seconds:1772545656 nanos:934148201}"
Mar 3 13:47:37.006792 kubelet[2772]: E0303 13:47:37.006683 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:37.016864 containerd[1559]: time="2026-03-03T13:47:37.016740829Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 3 13:47:37.030017 containerd[1559]: time="2026-03-03T13:47:37.029941111Z" level=info msg="Container edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:47:37.038337 containerd[1559]: time="2026-03-03T13:47:37.038256336Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89\""
Mar 3 13:47:37.044889 containerd[1559]: time="2026-03-03T13:47:37.044804640Z" level=info msg="StartContainer for \"edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89\""
Mar 3 13:47:37.045799 containerd[1559]: time="2026-03-03T13:47:37.045759113Z" level=info msg="connecting to shim edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89" address="unix:///run/containerd/s/fb0f41b5653a753c89148a23201d4c67c376610d3096652008dba84076640639" protocol=ttrpc version=3
Mar 3 13:47:37.080401 systemd[1]: Started cri-containerd-edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89.scope - libcontainer container edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89.
Mar 3 13:47:37.128849 containerd[1559]: time="2026-03-03T13:47:37.128788956Z" level=info msg="StartContainer for \"edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89\" returns successfully"
Mar 3 13:47:37.136658 systemd[1]: cri-containerd-edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89.scope: Deactivated successfully.
Mar 3 13:47:37.137749 containerd[1559]: time="2026-03-03T13:47:37.137558917Z" level=info msg="received container exit event container_id:\"edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89\" id:\"edc6d66983175553b53c313d176e926e087f9d0609586bbea2bbbeac26236a89\" pid:4717 exited_at:{seconds:1772545657 nanos:137313208}"
Mar 3 13:47:37.491556 kubelet[2772]: E0303 13:47:37.491420 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-d2s4g" podUID="b95e3323-a88e-4239-ae6f-4f59538641e6"
Mar 3 13:47:38.012781 kubelet[2772]: E0303 13:47:38.012701 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:38.020692 containerd[1559]: time="2026-03-03T13:47:38.020546378Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 3 13:47:38.046906 containerd[1559]: time="2026-03-03T13:47:38.046815957Z" level=info msg="Container 1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:47:38.050996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576649796.mount: Deactivated successfully.
Mar 3 13:47:38.066026 containerd[1559]: time="2026-03-03T13:47:38.065933489Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d\""
Mar 3 13:47:38.069023 containerd[1559]: time="2026-03-03T13:47:38.068938528Z" level=info msg="StartContainer for \"1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d\""
Mar 3 13:47:38.077903 containerd[1559]: time="2026-03-03T13:47:38.077812467Z" level=info msg="connecting to shim 1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d" address="unix:///run/containerd/s/fb0f41b5653a753c89148a23201d4c67c376610d3096652008dba84076640639" protocol=ttrpc version=3
Mar 3 13:47:38.128411 systemd[1]: Started cri-containerd-1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d.scope - libcontainer container 1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d.
Mar 3 13:47:38.253896 systemd[1]: cri-containerd-1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d.scope: Deactivated successfully.
Mar 3 13:47:38.255212 containerd[1559]: time="2026-03-03T13:47:38.254638574Z" level=info msg="StartContainer for \"1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d\" returns successfully"
Mar 3 13:47:38.256931 containerd[1559]: time="2026-03-03T13:47:38.256811373Z" level=info msg="received container exit event container_id:\"1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d\" id:\"1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d\" pid:4761 exited_at:{seconds:1772545658 nanos:256185217}"
Mar 3 13:47:38.461182 containerd[1559]: time="2026-03-03T13:47:38.460457809Z" level=info msg="StopPodSandbox for \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\""
Mar 3 13:47:38.462650 containerd[1559]: time="2026-03-03T13:47:38.461619129Z" level=info msg="TearDown network for sandbox \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" successfully"
Mar 3 13:47:38.462650 containerd[1559]: time="2026-03-03T13:47:38.461779388Z" level=info msg="StopPodSandbox for \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" returns successfully"
Mar 3 13:47:38.464393 containerd[1559]: time="2026-03-03T13:47:38.464230572Z" level=info msg="RemovePodSandbox for \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\""
Mar 3 13:47:38.464393 containerd[1559]: time="2026-03-03T13:47:38.464284693Z" level=info msg="Forcibly stopping sandbox \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\""
Mar 3 13:47:38.464393 containerd[1559]: time="2026-03-03T13:47:38.464364783Z" level=info msg="TearDown network for sandbox \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" successfully"
Mar 3 13:47:38.466390 containerd[1559]: time="2026-03-03T13:47:38.466316480Z" level=info msg="Ensure that sandbox 4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c in task-service has been cleanup successfully"
Mar 3 13:47:38.472306 containerd[1559]: time="2026-03-03T13:47:38.472208631Z" level=info msg="RemovePodSandbox \"4cdbe021e81eb6a50cfb858d1c0be5f4f9290c0ef8376a745aaeb5b9eb0c7b4c\" returns successfully"
Mar 3 13:47:38.472867 containerd[1559]: time="2026-03-03T13:47:38.472803274Z" level=info msg="StopPodSandbox for \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\""
Mar 3 13:47:38.472938 containerd[1559]: time="2026-03-03T13:47:38.472913400Z" level=info msg="TearDown network for sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" successfully"
Mar 3 13:47:38.472938 containerd[1559]: time="2026-03-03T13:47:38.472927426Z" level=info msg="StopPodSandbox for \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" returns successfully"
Mar 3 13:47:38.473587 containerd[1559]: time="2026-03-03T13:47:38.473451300Z" level=info msg="RemovePodSandbox for \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\""
Mar 3 13:47:38.473587 containerd[1559]: time="2026-03-03T13:47:38.473508817Z" level=info msg="Forcibly stopping sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\""
Mar 3 13:47:38.473587 containerd[1559]: time="2026-03-03T13:47:38.473583005Z" level=info msg="TearDown network for sandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" successfully"
Mar 3 13:47:38.475387 containerd[1559]: time="2026-03-03T13:47:38.475296316Z" level=info msg="Ensure that sandbox 5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec in task-service has been cleanup successfully"
Mar 3 13:47:38.481465 containerd[1559]: time="2026-03-03T13:47:38.481331501Z" level=info msg="RemovePodSandbox \"5700321e4e742fbdc9776e1ce1c34435afe1b2b87665e166441147387b4b4aec\" returns successfully"
Mar 3 13:47:38.585926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d7962fe64241845f081a5cd05a4d3d153ce29d0e3e5d7c6668faeb5e766009d-rootfs.mount: Deactivated successfully.
Mar 3 13:47:39.017755 kubelet[2772]: E0303 13:47:39.017668 2772 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:47:39.018943 kubelet[2772]: E0303 13:47:39.018872 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:39.024691 containerd[1559]: time="2026-03-03T13:47:39.024546941Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 13:47:39.038726 containerd[1559]: time="2026-03-03T13:47:39.038671062Z" level=info msg="Container 7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:47:39.041382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257887114.mount: Deactivated successfully.
Mar 3 13:47:39.047752 containerd[1559]: time="2026-03-03T13:47:39.047434529Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55\""
Mar 3 13:47:39.048245 containerd[1559]: time="2026-03-03T13:47:39.048203583Z" level=info msg="StartContainer for \"7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55\""
Mar 3 13:47:39.049539 containerd[1559]: time="2026-03-03T13:47:39.049476612Z" level=info msg="connecting to shim 7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55" address="unix:///run/containerd/s/fb0f41b5653a753c89148a23201d4c67c376610d3096652008dba84076640639" protocol=ttrpc version=3
Mar 3 13:47:39.072308 systemd[1]: Started cri-containerd-7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55.scope - libcontainer container 7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55.
Mar 3 13:47:39.122670 systemd[1]: cri-containerd-7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55.scope: Deactivated successfully.
Mar 3 13:47:39.124529 containerd[1559]: time="2026-03-03T13:47:39.124216670Z" level=info msg="received container exit event container_id:\"7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55\" id:\"7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55\" pid:4804 exited_at:{seconds:1772545659 nanos:123014383}"
Mar 3 13:47:39.127255 containerd[1559]: time="2026-03-03T13:47:39.127201448Z" level=info msg="StartContainer for \"7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55\" returns successfully"
Mar 3 13:47:39.156593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7518cf3f3d4c24b857496744f5356c0d8558a319848ea2c46b63971d34435d55-rootfs.mount: Deactivated successfully.
Mar 3 13:47:39.491434 kubelet[2772]: E0303 13:47:39.491372 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-d2s4g" podUID="b95e3323-a88e-4239-ae6f-4f59538641e6"
Mar 3 13:47:40.026812 kubelet[2772]: E0303 13:47:40.026664 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:40.036172 containerd[1559]: time="2026-03-03T13:47:40.033722591Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 13:47:40.053289 containerd[1559]: time="2026-03-03T13:47:40.053206751Z" level=info msg="Container 0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:47:40.064994 containerd[1559]: time="2026-03-03T13:47:40.064767918Z" level=info msg="CreateContainer within sandbox \"34f229ea8e1dda96dee457b6c12ffe9ba1410739f3b761fd1ae49b74388e2637\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc\""
Mar 3 13:47:40.065967 containerd[1559]: time="2026-03-03T13:47:40.065841906Z" level=info msg="StartContainer for \"0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc\""
Mar 3 13:47:40.067835 containerd[1559]: time="2026-03-03T13:47:40.067746744Z" level=info msg="connecting to shim 0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc" address="unix:///run/containerd/s/fb0f41b5653a753c89148a23201d4c67c376610d3096652008dba84076640639" protocol=ttrpc version=3
Mar 3 13:47:40.100328 systemd[1]: Started cri-containerd-0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc.scope - libcontainer container 0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc.
Mar 3 13:47:40.172320 containerd[1559]: time="2026-03-03T13:47:40.172284760Z" level=info msg="StartContainer for \"0c16906ec6a28f1a4457e33df73684c2a5fbed9f6b9bbe1349c157fd20c772fc\" returns successfully"
Mar 3 13:47:40.715217 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 3 13:47:41.039381 kubelet[2772]: E0303 13:47:41.038976 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:41.056367 kubelet[2772]: I0303 13:47:41.056266 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvbzj" podStartSLOduration=5.056249388 podStartE2EDuration="5.056249388s" podCreationTimestamp="2026-03-03 13:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:47:41.055802853 +0000 UTC m=+122.735318459" watchObservedRunningTime="2026-03-03 13:47:41.056249388 +0000 UTC m=+122.735764993"
Mar 3 13:47:41.383706 kubelet[2772]: I0303 13:47:41.383418 2772 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-03T13:47:41Z","lastTransitionTime":"2026-03-03T13:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 3 13:47:41.491057 kubelet[2772]: E0303 13:47:41.490849 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-d2s4g" podUID="b95e3323-a88e-4239-ae6f-4f59538641e6"
Mar 3 13:47:42.688451 kubelet[2772]: E0303 13:47:42.688343 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:43.491565 kubelet[2772]: E0303 13:47:43.491456 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-d2s4g" podUID="b95e3323-a88e-4239-ae6f-4f59538641e6"
Mar 3 13:47:44.446597 systemd-networkd[1464]: lxc_health: Link UP
Mar 3 13:47:44.453922 systemd-networkd[1464]: lxc_health: Gained carrier
Mar 3 13:47:44.688343 kubelet[2772]: E0303 13:47:44.688291 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:45.052018 kubelet[2772]: E0303 13:47:45.051852 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:45.491455 kubelet[2772]: E0303 13:47:45.491375 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:45.886452 systemd-networkd[1464]: lxc_health: Gained IPv6LL
Mar 3 13:47:46.053977 kubelet[2772]: E0303 13:47:46.053895 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:48.495298 kubelet[2772]: E0303 13:47:48.495191 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:51.690356 sshd[4601]: Connection closed by 10.0.0.1 port 56680
Mar 3 13:47:51.690780 sshd-session[4594]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:51.696337 systemd[1]: sshd@30-10.0.0.81:22-10.0.0.1:56680.service: Deactivated successfully.
Mar 3 13:47:51.699554 systemd[1]: session-31.scope: Deactivated successfully.
Mar 3 13:47:51.701596 systemd-logind[1542]: Session 31 logged out. Waiting for processes to exit.
Mar 3 13:47:51.707545 systemd-logind[1542]: Removed session 31.