Mar 12 01:28:02.252242 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 01:28:02.252275 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:28:02.252338 kernel: BIOS-provided physical RAM map:
Mar 12 01:28:02.252347 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 01:28:02.252356 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 01:28:02.252364 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 01:28:02.252374 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 12 01:28:02.252383 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 12 01:28:02.252393 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 01:28:02.252408 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 01:28:02.252419 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 01:28:02.252427 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 01:28:02.252435 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 12 01:28:02.252446 kernel: NX (Execute Disable) protection: active
Mar 12 01:28:02.252457 kernel: APIC: Static calls initialized
Mar 12 01:28:02.252470 kernel: SMBIOS 2.8 present.
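The e820 map above can be totaled mechanically. The sketch below (the regex and function name are mine, not from any kernel tool) sums the `usable` ranges, remembering that the end address of each range is inclusive; the result, about 2.45 GiB, is consistent with the `Memory:` line printed later in this boot.

```python
import re

E820_LINE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log: str) -> int:
    """Sum the sizes of all 'usable' e820 ranges (end address is inclusive)."""
    total = 0
    for start, end, kind in E820_LINE.findall(log):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

# The two usable ranges from the map above:
log = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
print(usable_bytes(log))  # 2633481216 bytes, roughly 2.45 GiB
```

The kernel's later "2571752K" total is slightly smaller because page 0 and the unaligned tail of the low range are dropped before accounting.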
Mar 12 01:28:02.252481 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 12 01:28:02.252490 kernel: Hypervisor detected: KVM
Mar 12 01:28:02.252498 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 01:28:02.252509 kernel: kvm-clock: using sched offset of 9403896553 cycles
Mar 12 01:28:02.252520 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 01:28:02.252528 kernel: tsc: Detected 2445.426 MHz processor
Mar 12 01:28:02.252540 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 01:28:02.252551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 01:28:02.252566 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 12 01:28:02.252627 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 01:28:02.252640 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 01:28:02.252649 kernel: Using GB pages for direct mapping
Mar 12 01:28:02.252658 kernel: ACPI: Early table checksum verification disabled
Mar 12 01:28:02.252669 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 12 01:28:02.252679 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252689 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252699 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252714 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 12 01:28:02.252724 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252733 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252743 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252753 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:28:02.252763 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 12 01:28:02.252772 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 12 01:28:02.252789 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 12 01:28:02.252803 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 12 01:28:02.252813 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 12 01:28:02.252824 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 12 01:28:02.252834 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 12 01:28:02.252843 kernel: No NUMA configuration found
Mar 12 01:28:02.252854 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 12 01:28:02.252863 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 12 01:28:02.252880 kernel: Zone ranges:
Mar 12 01:28:02.252889 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 01:28:02.252900 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 12 01:28:02.252911 kernel: Normal empty
Mar 12 01:28:02.252920 kernel: Movable zone start for each node
Mar 12 01:28:02.252931 kernel: Early memory node ranges
Mar 12 01:28:02.252942 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 01:28:02.252976 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 12 01:28:02.252987 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 12 01:28:02.253001 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:28:02.253011 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 01:28:02.253022 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 12 01:28:02.253034 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 01:28:02.253045 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 01:28:02.253056 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 01:28:02.253068 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 01:28:02.253078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 01:28:02.253089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 01:28:02.253106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 01:28:02.253118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 01:28:02.253128 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 01:28:02.253136 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 12 01:28:02.253148 kernel: TSC deadline timer available
Mar 12 01:28:02.253160 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 12 01:28:02.253169 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 01:28:02.253179 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 12 01:28:02.253224 kernel: kvm-guest: setup PV sched yield
Mar 12 01:28:02.253241 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 01:28:02.253250 kernel: Booting paravirtualized kernel on KVM
Mar 12 01:28:02.253260 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 01:28:02.253273 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 12 01:28:02.253282 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 12 01:28:02.253292 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 12 01:28:02.253304 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 12 01:28:02.253313 kernel: kvm-guest: PV spinlocks enabled
Mar 12 01:28:02.253324 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 01:28:02.253340 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:28:02.253350 kernel: random: crng init done
Mar 12 01:28:02.253360 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 01:28:02.253370 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 01:28:02.253380 kernel: Fallback order for Node 0: 0
Mar 12 01:28:02.253391 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 12 01:28:02.253402 kernel: Policy zone: DMA32
Mar 12 01:28:02.253411 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 01:28:02.253427 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 12 01:28:02.253438 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 12 01:28:02.253470 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 01:28:02.253482 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 01:28:02.253491 kernel: Dynamic Preempt: voluntary
Mar 12 01:28:02.253502 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 01:28:02.253514 kernel: rcu: RCU event tracing is enabled.
Mar 12 01:28:02.253524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 12 01:28:02.253535 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 01:28:02.253550 kernel: Rude variant of Tasks RCU enabled.
Mar 12 01:28:02.253561 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 01:28:02.253572 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
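The Dentry and Inode cache lines above tie three numbers together: entries, total bytes, and the page-allocation order. Assuming pointer-sized (8-byte) buckets and 4 KiB pages, a short sketch (names are mine) reproduces both lines:

```python
def table_order(entries: int, slot_bytes: int = 8, page_bytes: int = 4096) -> tuple[int, int]:
    """Return (order, total_bytes) for a hash table of `entries` slots.
    `order` is the page-allocation order: total_bytes == 2**order pages.
    slot_bytes=8 assumes pointer-sized buckets on x86-64."""
    total = entries * slot_bytes
    order = (total // page_bytes).bit_length() - 1
    return order, total

# Dentry cache: 524288 entries -> order 10, 4194304 bytes (matches the log)
print(table_order(524288))  # (10, 4194304)
# Inode cache: 262144 entries -> order 9, 2097152 bytes (matches the log)
print(table_order(262144))  # (9, 2097152)
```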
Mar 12 01:28:02.253629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 12 01:28:02.253640 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 12 01:28:02.253649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 01:28:02.253660 kernel: Console: colour VGA+ 80x25
Mar 12 01:28:02.253670 kernel: printk: console [ttyS0] enabled
Mar 12 01:28:02.253680 kernel: ACPI: Core revision 20230628
Mar 12 01:28:02.253690 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 12 01:28:02.253706 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 01:28:02.253717 kernel: x2apic enabled
Mar 12 01:28:02.253729 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 01:28:02.253740 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 12 01:28:02.253752 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 12 01:28:02.253764 kernel: kvm-guest: setup PV IPIs
Mar 12 01:28:02.253775 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 12 01:28:02.253805 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 12 01:28:02.253815 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 12 01:28:02.253825 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 01:28:02.253838 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 12 01:28:02.253853 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 12 01:28:02.253864 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 01:28:02.253876 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 01:28:02.253887 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 01:28:02.253896 kernel: Speculative Store Bypass: Vulnerable
Mar 12 01:28:02.253911 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 12 01:28:02.253923 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 12 01:28:02.253935 kernel: active return thunk: srso_alias_return_thunk
Mar 12 01:28:02.253945 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 12 01:28:02.253957 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 12 01:28:02.253969 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 12 01:28:02.253978 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 01:28:02.253991 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 01:28:02.254005 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 01:28:02.254016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 01:28:02.254027 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
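The "preset value" BogoMIPS above is derived directly from the TSC frequency: with `lpj=2445426` (loops per jiffy) the printed value is `lpj * HZ / 500000`, i.e. twice the clock rate in MHz. HZ=1000 is an assumption here (a typical configuration for this kernel); a one-liner confirms the arithmetic:

```python
def bogomips(lpj: int, hz: int = 1000) -> float:
    # BogoMIPS as printed by the kernel: loops_per_jiffy * HZ / 500000.
    # hz=1000 is an assumption about CONFIG_HZ, not stated in the log.
    return lpj * hz / 500_000

print(round(bogomips(2445426), 2))  # 4890.85, matching the log line
```

Note that 4890.85 is twice the 2445.426 MHz reported by the `tsc:` line earlier, as expected.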
Mar 12 01:28:02.254038 kernel: Freeing SMP alternatives memory: 32K
Mar 12 01:28:02.254050 kernel: pid_max: default: 32768 minimum: 301
Mar 12 01:28:02.254061 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 01:28:02.254071 kernel: landlock: Up and running.
Mar 12 01:28:02.254083 kernel: SELinux: Initializing.
Mar 12 01:28:02.254095 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:28:02.254109 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:28:02.254121 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 12 01:28:02.254132 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:28:02.254142 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:28:02.254154 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:28:02.254164 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 12 01:28:02.254175 kernel: signal: max sigframe size: 1776
Mar 12 01:28:02.254219 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 01:28:02.254231 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 01:28:02.254250 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 01:28:02.254260 kernel: smp: Bringing up secondary CPUs ...
Mar 12 01:28:02.254271 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 01:28:02.254281 kernel: .... node #0, CPUs: #1 #2 #3
Mar 12 01:28:02.254291 kernel: smp: Brought up 1 node, 4 CPUs
Mar 12 01:28:02.254304 kernel: smpboot: Max logical packages: 1
Mar 12 01:28:02.254314 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 12 01:28:02.254324 kernel: devtmpfs: initialized
Mar 12 01:28:02.254337 kernel: x86/mm: Memory block size: 128MB
Mar 12 01:28:02.254351 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 01:28:02.254364 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 12 01:28:02.254374 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 01:28:02.254385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 01:28:02.254396 kernel: audit: initializing netlink subsys (disabled)
Mar 12 01:28:02.254406 kernel: audit: type=2000 audit(1773278879.616:1): state=initialized audit_enabled=0 res=1
Mar 12 01:28:02.254417 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 01:28:02.254427 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 01:28:02.254438 kernel: cpuidle: using governor menu
Mar 12 01:28:02.254455 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 01:28:02.254467 kernel: dca service started, version 1.12.1
Mar 12 01:28:02.254479 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 01:28:02.254491 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 01:28:02.254503 kernel: PCI: Using configuration type 1 for base access
Mar 12 01:28:02.254515 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
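The "19563.40 BogoMIPS" total is the sum over the 4 CPUs of the per-CPU value, printed with the kernel's integer formatting (truncation, not rounding). A sketch of that formatting, as I recall it from `smpboot.c` (the helper name is mine, and HZ=1000 is an assumption):

```python
def total_bogomips_str(n_cpus: int, lpj: int, hz: int = 1000) -> str:
    # Mimics the kernel's "%lu.%02lu" with
    # bogosum/(500000/HZ) and (bogosum/(5000/HZ)) % 100,
    # where bogosum is the summed loops_per_jiffy of all online CPUs.
    bogosum = n_cpus * lpj
    return f"{bogosum // (500000 // hz)}.{(bogosum // (5000 // hz)) % 100:02d}"

print(total_bogomips_str(4, 2445426))  # "19563.40", as in the log
```

Truncation explains why the total reads 19563.40 rather than 4 x 4890.852 = 19563.41 after rounding.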
Mar 12 01:28:02.254527 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 01:28:02.254538 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 01:28:02.254547 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 01:28:02.254565 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 01:28:02.254629 kernel: ACPI: Added _OSI(Module Device)
Mar 12 01:28:02.254643 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 01:28:02.254653 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 01:28:02.254663 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 01:28:02.254676 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 01:28:02.254686 kernel: ACPI: Interpreter enabled
Mar 12 01:28:02.254697 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 12 01:28:02.254709 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 01:28:02.254725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 01:28:02.254737 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 01:28:02.254747 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 01:28:02.254757 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 01:28:02.255048 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 01:28:02.255296 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 12 01:28:02.255486 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 12 01:28:02.255511 kernel: PCI host bridge to bus 0000:00
Mar 12 01:28:02.255755 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 01:28:02.255932 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 01:28:02.256147 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 01:28:02.256355 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 12 01:28:02.256705 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 01:28:02.256891 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 12 01:28:02.257071 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 01:28:02.257391 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 01:28:02.257733 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 12 01:28:02.257923 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 12 01:28:02.258108 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 12 01:28:02.258341 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 12 01:28:02.258673 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 01:28:02.258884 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 12 01:28:02.259108 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 01:28:02.259331 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 12 01:28:02.259517 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 12 01:28:02.259784 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 12 01:28:02.259981 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 01:28:02.260166 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 12 01:28:02.260392 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 12 01:28:02.260818 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 12 01:28:02.261042 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 12 01:28:02.261273 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 12 01:28:02.261464 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 12 01:28:02.261710 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 12 01:28:02.261903 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 01:28:02.262094 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 01:28:02.262376 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 01:28:02.262890 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 12 01:28:02.263451 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 12 01:28:02.263887 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 01:28:02.264179 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 01:28:02.264235 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 01:28:02.264249 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 01:28:02.264259 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 01:28:02.264272 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 01:28:02.264284 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 01:28:02.264294 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 01:28:02.264306 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 01:28:02.264317 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 01:28:02.264327 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 01:28:02.264365 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 01:28:02.264378 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 01:28:02.264388 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 01:28:02.264418 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 01:28:02.264431 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 01:28:02.264443 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 01:28:02.264453 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 01:28:02.264465 kernel: iommu: Default domain type: Translated
Mar 12 01:28:02.264475 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 01:28:02.264491 kernel: PCI: Using ACPI for IRQ routing
Mar 12 01:28:02.264502 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 01:28:02.264514 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 01:28:02.264526 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 12 01:28:02.264784 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 01:28:02.264969 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 01:28:02.265151 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 01:28:02.265171 kernel: vgaarb: loaded
Mar 12 01:28:02.265225 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 12 01:28:02.265238 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 12 01:28:02.265251 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 01:28:02.265261 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 01:28:02.265273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 01:28:02.265283 kernel: pnp: PnP ACPI init
Mar 12 01:28:02.265564 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 01:28:02.265681 kernel: pnp: PnP ACPI: found 6 devices
Mar 12 01:28:02.265699 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 01:28:02.265712 kernel: NET: Registered PF_INET protocol family
Mar 12 01:28:02.265724 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 01:28:02.265734 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 01:28:02.265746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 01:28:02.265758 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 01:28:02.265770 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 01:28:02.265780 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 01:28:02.265792 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:28:02.265810 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:28:02.265821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:28:02.265833 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:28:02.266007 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:28:02.266232 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:28:02.266523 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:28:02.266792 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 12 01:28:02.267013 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 01:28:02.267293 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 12 01:28:02.267310 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:28:02.267322 kernel: Initialise system trusted keyrings
Mar 12 01:28:02.267333 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:28:02.267345 kernel: Key type asymmetric registered
Mar 12 01:28:02.267355 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:28:02.267365 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 12 01:28:02.267377 kernel: io scheduler mq-deadline registered
Mar 12 01:28:02.267387 kernel: io scheduler kyber registered
Mar 12 01:28:02.267402 kernel: io scheduler bfq registered
Mar 12 01:28:02.267411 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:28:02.267423 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:28:02.267433 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:28:02.267444 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:28:02.267454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:28:02.267465 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:28:02.267475 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:28:02.267484 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:28:02.267493 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:28:02.268890 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:28:02.269170 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:28:02.269379 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:28:01 UTC (1773278881)
Mar 12 01:28:02.269552 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 12 01:28:02.269568 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:28:02.269778 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:28:02.269793 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:28:02.269812 kernel: Segment Routing with IPv6
Mar 12 01:28:02.269822 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:28:02.269834 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:28:02.269847 kernel: Key type dns_resolver registered
Mar 12 01:28:02.269858 kernel: IPI shorthand broadcast: enabled
Mar 12 01:28:02.269868 kernel: sched_clock: Marking stable (1483026270, 400008383)->(2277183001, -394148348)
Mar 12 01:28:02.269881 kernel: registered taskstats version 1
Mar 12 01:28:02.269891 kernel: Loading compiled-in X.509 certificates
Mar 12 01:28:02.269901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510'
Mar 12 01:28:02.269942 kernel: Key type .fscrypt registered
Mar 12 01:28:02.269952 kernel: Key type fscrypt-provisioning registered
Mar 12 01:28:02.269962 kernel: ima: No TPM chip found, activating TPM-bypass!
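The `rtc_cmos` line above reports the same instant twice, as an ISO timestamp and as a Unix epoch value. The two are consistent, which a timezone-aware conversion confirms:

```python
from datetime import datetime, timezone

# Epoch value from the rtc_cmos log line; converting it back in UTC
# should reproduce the human-readable timestamp printed beside it.
stamp = datetime.fromtimestamp(1773278881, tz=timezone.utc)
print(stamp.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # 2026-03-12T01:28:01 UTC
```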
Mar 12 01:28:02.269974 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:28:02.269984 kernel: ima: No architecture policies found
Mar 12 01:28:02.269997 kernel: clk: Disabling unused clocks
Mar 12 01:28:02.270008 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 01:28:02.270018 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 01:28:02.270030 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 01:28:02.270046 kernel: Run /init as init process
Mar 12 01:28:02.270085 kernel: with arguments:
Mar 12 01:28:02.270096 kernel: /init
Mar 12 01:28:02.270106 kernel: with environment:
Mar 12 01:28:02.270118 kernel: HOME=/
Mar 12 01:28:02.270129 kernel: TERM=linux
Mar 12 01:28:02.270145 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:28:02.270157 systemd[1]: Detected virtualization kvm.
Mar 12 01:28:02.270176 systemd[1]: Detected architecture x86-64.
Mar 12 01:28:02.270219 systemd[1]: Running in initrd.
Mar 12 01:28:02.270230 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:28:02.270243 systemd[1]: Hostname set to .
Mar 12 01:28:02.270254 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:28:02.270265 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:28:02.270275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:28:02.270288 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:28:02.270305 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:28:02.270319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:28:02.270330 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:28:02.270342 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:28:02.270356 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 01:28:02.270369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 01:28:02.270382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:28:02.270398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:28:02.270410 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:28:02.270422 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:28:02.270432 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:28:02.270463 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:28:02.270481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:28:02.270498 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:28:02.270512 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:28:02.270531 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:28:02.270545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:28:02.270556 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:28:02.270567 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:28:02.270675 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:28:02.270689 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:28:02.270703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:28:02.270720 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:28:02.270734 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:28:02.270766 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:28:02.270778 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:28:02.270790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:28:02.270802 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:28:02.270845 systemd-journald[194]: Collecting audit messages is disabled.
Mar 12 01:28:02.270880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:28:02.270893 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:28:02.270909 systemd-journald[194]: Journal started
Mar 12 01:28:02.270934 systemd-journald[194]: Runtime Journal (/run/log/journal/11d80b9ef1eb4342abfd74c686ed3c6a) is 6.0M, max 48.4M, 42.3M free.
Mar 12 01:28:02.272361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:28:02.264454 systemd-modules-load[195]: Inserted module 'overlay'
Mar 12 01:28:02.290535 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:28:02.291748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:28:02.472413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:28:02.472446 kernel: Bridge firewalling registered Mar 12 01:28:02.315020 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 12 01:28:02.476147 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:28:02.484237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:28:02.506940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:28:02.508376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:28:02.519679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:28:02.521261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:28:02.569345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:28:02.582498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:28:02.582995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:28:02.600331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:28:02.619057 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 01:28:02.631450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 12 01:28:02.643051 dracut-cmdline[228]: dracut-dracut-053 Mar 12 01:28:02.650007 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:28:02.685457 systemd-resolved[235]: Positive Trust Anchors: Mar 12 01:28:02.685932 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:28:02.687082 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:28:02.691845 systemd-resolved[235]: Defaulting to hostname 'linux'. Mar 12 01:28:02.693464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:28:02.726433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:28:02.772695 kernel: SCSI subsystem initialized Mar 12 01:28:02.786770 kernel: Loading iSCSI transport class v2.0-870. Mar 12 01:28:02.802677 kernel: iscsi: registered transport (tcp) Mar 12 01:28:02.835466 kernel: iscsi: registered transport (qla4xxx) Mar 12 01:28:02.835550 kernel: QLogic iSCSI HBA Driver Mar 12 01:28:02.914316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 12 01:28:02.938479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 01:28:02.978425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 01:28:02.978493 kernel: device-mapper: uevent: version 1.0.3 Mar 12 01:28:02.978505 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 01:28:03.032709 kernel: raid6: avx2x4 gen() 23802 MB/s Mar 12 01:28:03.050686 kernel: raid6: avx2x2 gen() 23833 MB/s Mar 12 01:28:03.072220 kernel: raid6: avx2x1 gen() 18854 MB/s Mar 12 01:28:03.072313 kernel: raid6: using algorithm avx2x2 gen() 23833 MB/s Mar 12 01:28:03.093252 kernel: raid6: .... xor() 23079 MB/s, rmw enabled Mar 12 01:28:03.093335 kernel: raid6: using avx2x2 recovery algorithm Mar 12 01:28:03.119666 kernel: xor: automatically using best checksumming function avx Mar 12 01:28:03.356651 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 01:28:03.380267 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:28:03.394877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:28:03.414118 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 12 01:28:03.421230 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:28:03.425811 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 01:28:03.453976 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Mar 12 01:28:03.509544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:28:03.527342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:28:03.622860 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:28:03.642879 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 12 01:28:03.664173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 01:28:03.671846 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:28:03.676442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:28:03.679911 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:28:03.703932 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 01:28:03.714859 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 12 01:28:03.718399 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:28:03.735776 kernel: cryptd: max_cpu_qlen set to 1000 Mar 12 01:28:03.718567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:28:03.752454 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 12 01:28:03.761461 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 01:28:03.761496 kernel: GPT:9289727 != 19775487 Mar 12 01:28:03.761515 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 01:28:03.724177 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:28:03.778066 kernel: GPT:9289727 != 19775487 Mar 12 01:28:03.778151 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 01:28:03.778422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:28:03.731715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:28:03.736082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:28:03.752554 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:28:03.789141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 12 01:28:03.798128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:28:03.818894 kernel: libata version 3.00 loaded. Mar 12 01:28:03.833944 kernel: ahci 0000:00:1f.2: version 3.0 Mar 12 01:28:03.834408 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 12 01:28:03.838772 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 12 01:28:03.839108 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 12 01:28:03.847249 kernel: scsi host0: ahci Mar 12 01:28:03.847863 kernel: AVX2 version of gcm_enc/dec engaged. Mar 12 01:28:03.847959 kernel: AES CTR mode by8 optimization enabled Mar 12 01:28:03.847984 kernel: scsi host1: ahci Mar 12 01:28:03.848853 kernel: scsi host2: ahci Mar 12 01:28:03.849152 kernel: scsi host3: ahci Mar 12 01:28:03.849953 kernel: scsi host4: ahci Mar 12 01:28:03.854497 kernel: scsi host5: ahci Mar 12 01:28:03.854813 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 12 01:28:03.854833 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 12 01:28:03.854850 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 12 01:28:03.854865 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 12 01:28:03.854881 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 12 01:28:03.854896 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 12 01:28:03.886770 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (462) Mar 12 01:28:03.891388 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478) Mar 12 01:28:03.905986 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 12 01:28:04.063542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 12 01:28:04.079761 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 12 01:28:04.080325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:28:04.094546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 12 01:28:04.099336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 12 01:28:04.136308 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 01:28:04.152096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:28:04.174689 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 12 01:28:04.175894 disk-uuid[556]: Primary Header is updated. Mar 12 01:28:04.175894 disk-uuid[556]: Secondary Entries is updated. Mar 12 01:28:04.175894 disk-uuid[556]: Secondary Header is updated. Mar 12 01:28:04.224905 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 12 01:28:04.224942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:28:04.224959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:28:04.224975 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 12 01:28:04.225000 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 12 01:28:04.225014 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 12 01:28:04.225028 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 12 01:28:04.225043 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 12 01:28:04.236427 kernel: ata3.00: applying bridge limits Mar 12 01:28:04.236498 kernel: ata3.00: configured for UDMA/100 Mar 12 01:28:04.248428 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 12 01:28:04.291865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 12 01:28:04.352619 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 12 01:28:04.353411 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 12 01:28:04.389797 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 12 01:28:05.210968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:28:05.215361 disk-uuid[557]: The operation has completed successfully. Mar 12 01:28:05.325800 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 01:28:05.326011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 01:28:05.400144 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 01:28:05.436714 sh[594]: Success Mar 12 01:28:05.467010 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 12 01:28:05.594079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 01:28:05.601969 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 01:28:05.607915 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 12 01:28:05.633638 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb Mar 12 01:28:05.633712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:28:05.633732 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 01:28:05.639553 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 01:28:05.643340 kernel: BTRFS info (device dm-0): using free space tree Mar 12 01:28:05.662329 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 01:28:05.663344 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 01:28:05.693310 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 12 01:28:05.697879 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 01:28:05.719007 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:28:05.719052 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:28:05.719063 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:28:05.727695 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:28:05.745683 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 12 01:28:05.753843 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:28:05.760663 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 12 01:28:05.791649 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 01:28:05.871335 ignition[701]: Ignition 2.19.0 Mar 12 01:28:05.871407 ignition[701]: Stage: fetch-offline Mar 12 01:28:05.871470 ignition[701]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:28:05.871485 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:28:05.871803 ignition[701]: parsed url from cmdline: "" Mar 12 01:28:05.871810 ignition[701]: no config URL provided Mar 12 01:28:05.871819 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 01:28:05.871836 ignition[701]: no config at "/usr/lib/ignition/user.ign" Mar 12 01:28:05.871875 ignition[701]: op(1): [started] loading QEMU firmware config module Mar 12 01:28:05.871884 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 12 01:28:05.894051 ignition[701]: op(1): [finished] loading QEMU firmware config module Mar 12 01:28:05.921222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:28:05.933920 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 12 01:28:05.981136 systemd-networkd[783]: lo: Link UP Mar 12 01:28:05.981171 systemd-networkd[783]: lo: Gained carrier Mar 12 01:28:05.984264 systemd-networkd[783]: Enumeration completed Mar 12 01:28:05.985533 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:28:05.985541 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:28:05.986750 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:28:05.989805 systemd-networkd[783]: eth0: Link UP Mar 12 01:28:05.989811 systemd-networkd[783]: eth0: Gained carrier Mar 12 01:28:05.989822 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:28:05.993779 systemd[1]: Reached target network.target - Network. Mar 12 01:28:06.034706 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:28:06.178675 ignition[701]: parsing config with SHA512: 943f96571ae583e68c29dc2d99107da01b383ec3018cd3473d42aeb66fe0f8a072e6ea5bc6be6ead6a4834363f51e7e544062dc3a55c70985ee525b8fa8d90da Mar 12 01:28:06.254524 unknown[701]: fetched base config from "system" Mar 12 01:28:06.254564 unknown[701]: fetched user config from "qemu" Mar 12 01:28:06.256097 ignition[701]: fetch-offline: fetch-offline passed Mar 12 01:28:06.256256 ignition[701]: Ignition finished successfully Mar 12 01:28:06.277967 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:28:06.286506 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 12 01:28:06.313010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 12 01:28:06.436146 ignition[787]: Ignition 2.19.0 Mar 12 01:28:06.436231 ignition[787]: Stage: kargs Mar 12 01:28:06.436967 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:28:06.436987 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:28:06.443803 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 01:28:06.438672 ignition[787]: kargs: kargs passed Mar 12 01:28:06.492895 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 01:28:06.438744 ignition[787]: Ignition finished successfully Mar 12 01:28:06.533476 ignition[795]: Ignition 2.19.0 Mar 12 01:28:06.533511 ignition[795]: Stage: disks Mar 12 01:28:06.533843 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:28:06.538060 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 01:28:06.533866 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:28:06.546396 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 01:28:06.535044 ignition[795]: disks: disks passed Mar 12 01:28:06.551560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 01:28:06.535113 ignition[795]: Ignition finished successfully Mar 12 01:28:06.552368 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:28:06.588556 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:28:06.601465 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:28:06.624070 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 01:28:06.660904 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 12 01:28:06.697458 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 01:28:06.717084 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 12 01:28:06.986831 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none. Mar 12 01:28:06.987410 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 01:28:06.990999 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 01:28:07.009779 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:28:07.016524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 01:28:07.036410 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Mar 12 01:28:07.021753 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 01:28:07.063017 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:28:07.063061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:28:07.063078 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:28:07.021825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 01:28:07.086491 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:28:07.021863 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:28:07.038664 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 01:28:07.066769 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 12 01:28:07.088312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 12 01:28:07.177394 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 01:28:07.254846 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Mar 12 01:28:07.269948 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 01:28:07.280034 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 01:28:07.512347 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 01:28:07.537092 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 01:28:07.549125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 01:28:07.592022 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:28:07.592044 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 01:28:07.644778 systemd-networkd[783]: eth0: Gained IPv6LL Mar 12 01:28:07.660366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 01:28:07.716948 ignition[927]: INFO : Ignition 2.19.0 Mar 12 01:28:07.716948 ignition[927]: INFO : Stage: mount Mar 12 01:28:07.722078 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:28:07.722078 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:28:07.732419 ignition[927]: INFO : mount: mount passed Mar 12 01:28:07.736639 ignition[927]: INFO : Ignition finished successfully Mar 12 01:28:07.741065 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 01:28:07.767060 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 01:28:08.007104 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 12 01:28:08.031881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 12 01:28:08.045061 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:28:08.045742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:28:08.049737 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:28:08.090316 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:28:08.099171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:28:08.193761 kernel: hrtimer: interrupt took 6259617 ns Mar 12 01:28:08.223842 ignition[957]: INFO : Ignition 2.19.0 Mar 12 01:28:08.223842 ignition[957]: INFO : Stage: files Mar 12 01:28:08.238485 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:28:08.238485 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:28:08.238485 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:28:08.254968 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:28:08.254968 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:28:08.294660 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:28:08.301534 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:28:08.301534 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:28:08.299463 unknown[957]: wrote ssh authorized keys file for user: core Mar 12 01:28:08.314563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:28:08.314563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:28:08.455391 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 01:28:08.885132 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:28:08.885132 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 12 01:28:08.885132 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 12 01:28:09.237120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 12 01:28:10.024170 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 12 01:28:10.024170 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:28:10.046098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 12 01:28:10.346236 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 12 01:28:11.372564 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 12 01:28:11.372564 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 12 01:28:11.399315 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:28:11.530308 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:28:11.563031 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:28:11.578712 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:28:11.578712 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:28:11.578712 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:28:11.578712 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:28:11.578712 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:28:11.578712 ignition[957]: INFO : files: files passed Mar 12 01:28:11.578712 ignition[957]: INFO : Ignition finished successfully Mar 12 01:28:11.570656 systemd[1]: Finished 
ignition-files.service - Ignition (files). Mar 12 01:28:11.618136 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:28:11.633872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 12 01:28:11.642047 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:28:11.642193 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 01:28:11.658619 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:28:11.664297 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:28:11.664297 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:28:11.678793 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:28:11.695049 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:28:11.704325 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:28:11.722882 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:28:11.798383 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:28:11.798549 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:28:11.805258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:28:11.809354 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:28:11.811324 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:28:11.834957 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Mar 12 01:28:11.885713 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:28:11.904703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 01:28:11.920854 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:28:11.926693 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:28:11.932892 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 01:28:11.933053 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 01:28:11.933352 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:28:11.934306 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 01:28:11.936249 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 01:28:11.936692 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 01:28:12.102185 ignition[1012]: INFO : Ignition 2.19.0
Mar 12 01:28:12.102185 ignition[1012]: INFO : Stage: umount
Mar 12 01:28:12.102185 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:28:12.102185 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:28:12.102185 ignition[1012]: INFO : umount: umount passed
Mar 12 01:28:12.102185 ignition[1012]: INFO : Ignition finished successfully
Mar 12 01:28:11.937303 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:28:11.938389 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 01:28:11.939336 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 01:28:11.941541 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:28:11.942429 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 01:28:11.943406 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 01:28:11.944364 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 01:28:11.945444 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 01:28:11.945747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:28:11.947145 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:28:11.947477 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:28:11.950954 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 01:28:11.951240 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:28:11.952399 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 01:28:11.952668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:28:11.953312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 01:28:11.953516 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:28:11.954541 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 01:28:11.955306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 01:28:11.961446 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:28:11.963557 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 01:28:11.964842 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 01:28:11.965458 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 01:28:11.965679 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:28:11.965960 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 01:28:11.966102 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:28:11.976371 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 01:28:11.976695 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 01:28:11.980625 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 01:28:11.981509 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 01:28:12.044382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 01:28:12.050917 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 01:28:12.054764 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 01:28:12.055159 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:28:12.064705 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 01:28:12.065185 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:28:12.091356 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 01:28:12.091499 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 01:28:12.102535 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 01:28:12.507794 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 12 01:28:12.102756 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 01:28:12.112992 systemd[1]: Stopped target network.target - Network.
Mar 12 01:28:12.116331 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 01:28:12.116446 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 01:28:12.117551 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 01:28:12.117667 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 01:28:12.118661 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 01:28:12.118723 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 01:28:12.120333 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 01:28:12.120397 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 01:28:12.121858 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 01:28:12.124001 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 01:28:12.130059 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 01:28:12.142089 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 01:28:12.142382 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 01:28:12.147789 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 12 01:28:12.149231 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 01:28:12.150412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:28:12.159389 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 01:28:12.159709 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 01:28:12.183330 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 01:28:12.183570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 01:28:12.193560 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 01:28:12.193698 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:28:12.200679 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 01:28:12.200772 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 01:28:12.225724 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 01:28:12.230110 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 01:28:12.230193 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:28:12.241500 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 01:28:12.241652 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:28:12.249525 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 01:28:12.249717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:28:12.259298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:28:12.309110 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 01:28:12.310682 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:28:12.319060 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 01:28:12.319355 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 01:28:12.333895 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 01:28:12.334095 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:28:12.340740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 01:28:12.340817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:28:12.341007 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 01:28:12.341089 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:28:12.344101 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 01:28:12.344354 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:28:12.345526 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:28:12.345646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:28:12.386014 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 01:28:12.396876 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 01:28:12.397055 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:28:12.407361 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 12 01:28:12.407464 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:28:12.418658 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 01:28:12.418760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:28:12.418956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:28:12.419026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:28:12.420966 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 01:28:12.421143 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 01:28:12.422367 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 01:28:12.425663 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 01:28:12.446901 systemd[1]: Switching root.
Mar 12 01:28:12.766493 systemd-journald[194]: Journal stopped
Mar 12 01:28:14.430106 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 01:28:14.430244 kernel: SELinux: policy capability open_perms=1
Mar 12 01:28:14.430269 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 01:28:14.430287 kernel: SELinux: policy capability always_check_network=0
Mar 12 01:28:14.430302 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 01:28:14.430318 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 01:28:14.430848 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 01:28:14.430869 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 01:28:14.430895 kernel: audit: type=1403 audit(1773278892.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 01:28:14.430921 systemd[1]: Successfully loaded SELinux policy in 78.319ms.
Mar 12 01:28:14.430952 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.445ms.
Mar 12 01:28:14.430970 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:28:14.430988 systemd[1]: Detected virtualization kvm.
Mar 12 01:28:14.431004 systemd[1]: Detected architecture x86-64.
Mar 12 01:28:14.431021 systemd[1]: Detected first boot.
Mar 12 01:28:14.431038 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:28:14.431054 zram_generator::config[1057]: No configuration found.
Mar 12 01:28:14.431076 systemd[1]: Populated /etc with preset unit settings.
Mar 12 01:28:14.431094 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 01:28:14.431114 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 01:28:14.431130 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 01:28:14.431148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 01:28:14.431166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 01:28:14.431186 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 01:28:14.431247 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 01:28:14.431272 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 01:28:14.431290 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 01:28:14.431309 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 01:28:14.431328 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 01:28:14.431350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:28:14.431367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:28:14.431384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 01:28:14.431402 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 01:28:14.431420 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 01:28:14.431444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:28:14.431463 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 01:28:14.431480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:28:14.431500 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 01:28:14.431516 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 01:28:14.431534 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:28:14.431553 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 01:28:14.431634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:28:14.431658 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:28:14.431678 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:28:14.431696 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:28:14.431713 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 01:28:14.431734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 01:28:14.431750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:28:14.431768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:28:14.431788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:28:14.431805 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 01:28:14.431831 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 01:28:14.431851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 01:28:14.431869 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 01:28:14.431888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:28:14.431905 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 01:28:14.431922 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 01:28:14.431940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 01:28:14.431959 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 01:28:14.431981 systemd[1]: Reached target machines.target - Containers.
Mar 12 01:28:14.431998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 01:28:14.432017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:28:14.432034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:28:14.432051 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 01:28:14.432068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:28:14.432087 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 01:28:14.432104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:28:14.432121 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 01:28:14.432142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:28:14.432159 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 01:28:14.432177 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 01:28:14.432239 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 01:28:14.432259 kernel: fuse: init (API version 7.39)
Mar 12 01:28:14.432278 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 01:28:14.432294 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 01:28:14.432311 kernel: ACPI: bus type drm_connector registered
Mar 12 01:28:14.432333 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:28:14.432350 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:28:14.432367 kernel: loop: module loaded
Mar 12 01:28:14.432384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 01:28:14.432401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 01:28:14.432450 systemd-journald[1141]: Collecting audit messages is disabled.
Mar 12 01:28:14.432488 systemd-journald[1141]: Journal started
Mar 12 01:28:14.432520 systemd-journald[1141]: Runtime Journal (/run/log/journal/11d80b9ef1eb4342abfd74c686ed3c6a) is 6.0M, max 48.4M, 42.3M free.
Mar 12 01:28:13.796000 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 01:28:13.828112 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 12 01:28:13.829072 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 01:28:13.829769 systemd[1]: systemd-journald.service: Consumed 2.089s CPU time.
Mar 12 01:28:14.442305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:28:14.453662 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 01:28:14.453775 systemd[1]: Stopped verity-setup.service.
Mar 12 01:28:14.453806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:28:14.464501 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:28:14.466024 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 01:28:14.481928 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 01:28:14.486696 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 01:28:14.490748 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 01:28:14.495068 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 01:28:14.499152 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 01:28:14.503322 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 01:28:14.508489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:28:14.513991 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 01:28:14.514321 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 01:28:14.519462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:28:14.519799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:28:14.524462 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 01:28:14.524778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 01:28:14.529521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:28:14.529811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:28:14.535077 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 01:28:14.535403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 01:28:14.541363 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:28:14.541684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:28:14.546508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:28:14.551410 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 01:28:14.556467 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 01:28:14.585809 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 01:28:14.601777 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 01:28:14.608027 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 01:28:14.612290 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 01:28:14.612360 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:28:14.617835 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 12 01:28:14.624393 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 01:28:14.630985 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 01:28:14.635424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:28:14.637858 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 01:28:14.643932 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 01:28:14.648016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 01:28:14.652268 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 01:28:14.656810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 01:28:14.658549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:28:14.668153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 01:28:14.681572 systemd-journald[1141]: Time spent on flushing to /var/log/journal/11d80b9ef1eb4342abfd74c686ed3c6a is 40.329ms for 947 entries.
Mar 12 01:28:14.681572 systemd-journald[1141]: System Journal (/var/log/journal/11d80b9ef1eb4342abfd74c686ed3c6a) is 8.0M, max 195.6M, 187.6M free.
Mar 12 01:28:15.031873 systemd-journald[1141]: Received client request to flush runtime journal.
Mar 12 01:28:15.038660 kernel: loop0: detected capacity change from 0 to 142488
Mar 12 01:28:15.119801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 01:28:14.690832 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:28:14.696858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:28:14.701090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 01:28:14.732878 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 01:28:14.798132 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 01:28:14.890950 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 12 01:28:14.908965 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 01:28:14.917390 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 01:28:14.930894 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 12 01:28:14.936134 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:28:14.948381 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 12 01:28:15.042477 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 01:28:15.146424 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 01:28:15.146537 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 12 01:28:15.146556 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 12 01:28:15.148095 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 12 01:28:15.160679 kernel: loop1: detected capacity change from 0 to 140768
Mar 12 01:28:15.215706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:28:15.229843 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 01:28:15.249770 kernel: loop2: detected capacity change from 0 to 217752
Mar 12 01:28:15.547678 kernel: loop3: detected capacity change from 0 to 142488
Mar 12 01:28:15.548457 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 01:28:15.602439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:28:15.658965 kernel: loop4: detected capacity change from 0 to 140768
Mar 12 01:28:15.733653 kernel: loop5: detected capacity change from 0 to 217752
Mar 12 01:28:15.738991 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Mar 12 01:28:15.739008 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Mar 12 01:28:15.747722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:28:15.757809 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 12 01:28:15.758833 (sd-merge)[1194]: Merged extensions into '/usr'.
Mar 12 01:28:15.765190 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 01:28:15.765247 systemd[1]: Reloading...
Mar 12 01:28:16.083633 zram_generator::config[1224]: No configuration found.
Mar 12 01:28:16.425636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:28:16.593893 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 01:28:16.600903 systemd[1]: Reloading finished in 835 ms.
Mar 12 01:28:16.632788 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 01:28:16.637258 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 01:28:16.684285 systemd[1]: Starting ensure-sysext.service...
Mar 12 01:28:16.691544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:28:16.716701 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Mar 12 01:28:16.716754 systemd[1]: Reloading...
Mar 12 01:28:16.844027 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 01:28:16.844459 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 01:28:16.845494 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 01:28:16.850282 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Mar 12 01:28:16.850445 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Mar 12 01:28:16.905805 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 01:28:16.905822 systemd-tmpfiles[1263]: Skipping /boot
Mar 12 01:28:17.193702 zram_generator::config[1292]: No configuration found.
Mar 12 01:28:17.224517 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 01:28:17.224735 systemd-tmpfiles[1263]: Skipping /boot
Mar 12 01:28:17.425571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:28:17.495390 systemd[1]: Reloading finished in 777 ms.
Mar 12 01:28:17.519448 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:28:17.542879 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 01:28:17.549952 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 01:28:17.557041 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 01:28:17.563670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:28:17.588955 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 01:28:17.606009 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 01:28:17.612129 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:28:17.612404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:28:17.614405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:28:17.629970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:28:17.648170 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:28:17.653325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:28:17.653465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:28:17.654853 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:28:17.660969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:28:17.661361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:28:17.666858 augenrules[1351]: No rules Mar 12 01:28:17.667964 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:28:17.680355 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:28:17.686425 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 01:28:17.699011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:28:17.699187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:28:17.703765 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:28:17.704033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:28:17.719978 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:28:17.720347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:28:17.730113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:28:17.735779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:28:17.742154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:28:17.746263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 12 01:28:17.748661 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:28:17.753890 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:28:17.757406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:28:17.760180 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 01:28:17.764460 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:28:17.770525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:28:17.771286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:28:17.775566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:28:17.775813 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:28:17.780269 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:28:17.780551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:28:17.784471 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:28:17.792296 systemd-resolved[1332]: Positive Trust Anchors: Mar 12 01:28:17.792342 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:28:17.792388 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:28:17.796918 systemd-udevd[1370]: Using default interface naming scheme 'v255'. Mar 12 01:28:17.797422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:28:17.798064 systemd-resolved[1332]: Defaulting to hostname 'linux'. Mar 12 01:28:17.798337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:28:17.814882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:28:17.819774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:28:17.825682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:28:17.831429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:28:17.836869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:28:17.837021 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 12 01:28:17.837093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:28:17.837875 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:28:17.841807 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:28:17.848929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:28:17.849181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:28:17.855364 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:28:17.856094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:28:17.862450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:28:17.862826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:28:17.868888 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:28:17.869164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:28:17.878815 systemd[1]: Finished ensure-sysext.service. Mar 12 01:28:17.900726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:28:17.904640 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1404) Mar 12 01:28:17.918837 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:28:17.923431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:28:17.923559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:28:17.926761 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 12 01:28:17.931518 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:28:17.995115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:28:18.018685 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:28:18.022164 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 01:28:18.030418 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:28:18.062449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 01:28:18.076655 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:28:18.083438 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:28:18.083932 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:28:18.090899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:28:18.096622 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:28:18.097761 systemd-networkd[1406]: lo: Link UP Mar 12 01:28:18.098097 systemd-networkd[1406]: lo: Gained carrier Mar 12 01:28:18.103553 systemd-networkd[1406]: Enumeration completed Mar 12 01:28:18.103820 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:28:18.105723 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:28:18.105815 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:28:18.108136 systemd[1]: Reached target network.target - Network. 
Mar 12 01:28:18.111289 systemd-networkd[1406]: eth0: Link UP Mar 12 01:28:18.111350 systemd-networkd[1406]: eth0: Gained carrier Mar 12 01:28:18.111438 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:28:18.119808 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:28:18.129308 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:28:18.138803 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:28:18.140969 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Mar 12 01:28:18.144799 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:28:18.144879 systemd-timesyncd[1409]: Initial clock synchronization to Thu 2026-03-12 01:28:18.329194 UTC. Mar 12 01:28:18.201782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:28:18.257688 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:28:18.284117 kernel: kvm_amd: TSC scaling supported Mar 12 01:28:18.284262 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:28:18.284321 kernel: kvm_amd: Nested Paging enabled Mar 12 01:28:18.288132 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:28:18.288263 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:28:18.354629 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:28:18.385936 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:28:18.466798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:28:18.485825 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:28:18.501150 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 12 01:28:18.535494 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 01:28:18.539437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:28:18.542680 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:28:18.545831 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:28:18.549953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:28:18.554027 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:28:18.557497 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:28:18.562153 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 01:28:18.567437 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 01:28:18.567521 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:28:18.571974 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:28:18.576396 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:28:18.582487 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:28:18.593439 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:28:18.599484 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 12 01:28:18.604262 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 01:28:18.608432 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:28:18.611853 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:28:18.614409 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 12 01:28:18.615426 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:28:18.615492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:28:18.617087 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:28:18.622142 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 01:28:18.629555 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:28:18.634823 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:28:18.638758 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:28:18.640505 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:28:18.645445 jq[1440]: false Mar 12 01:28:18.646777 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:28:18.652846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 01:28:18.662862 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:28:18.672819 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 01:28:18.677253 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:28:18.678982 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 12 01:28:18.680298 extend-filesystems[1441]: Found loop3 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found loop4 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found loop5 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found sr0 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda1 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda2 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda3 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found usr Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda4 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda6 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda7 Mar 12 01:28:18.680298 extend-filesystems[1441]: Found vda9 Mar 12 01:28:18.680298 extend-filesystems[1441]: Checking size of /dev/vda9 Mar 12 01:28:18.733100 extend-filesystems[1441]: Resized partition /dev/vda9 Mar 12 01:28:18.749729 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 12 01:28:18.749777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1392) Mar 12 01:28:18.680317 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 01:28:18.684181 dbus-daemon[1439]: [system] SELinux support is enabled Mar 12 01:28:18.750163 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Mar 12 01:28:18.688016 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:28:18.753717 update_engine[1455]: I20260312 01:28:18.718923 1455 main.cc:92] Flatcar Update Engine starting Mar 12 01:28:18.753717 update_engine[1455]: I20260312 01:28:18.721140 1455 update_check_scheduler.cc:74] Next update check in 10m2s Mar 12 01:28:18.689685 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 01:28:18.701293 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 12 01:28:18.754237 jq[1457]: true Mar 12 01:28:18.712261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 01:28:18.713516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:28:18.714251 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:28:18.714754 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:28:18.725248 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:28:18.725440 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:28:18.765020 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 01:28:18.769300 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:28:18.807684 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 12 01:28:18.807795 jq[1465]: true Mar 12 01:28:18.807908 tar[1464]: linux-amd64/LICENSE Mar 12 01:28:18.807908 tar[1464]: linux-amd64/helm Mar 12 01:28:18.772908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:28:18.808487 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:28:18.808487 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:28:18.808487 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 12 01:28:18.772947 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 12 01:28:18.832628 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Mar 12 01:28:18.778742 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:28:18.778766 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:28:18.794350 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 01:28:18.816376 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:28:18.816760 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:28:18.825174 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Mar 12 01:28:18.825195 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:28:18.825973 systemd-logind[1452]: New seat seat0. Mar 12 01:28:18.828767 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:28:18.887743 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:28:18.894551 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:28:18.895914 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:28:18.900299 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 01:28:18.967086 containerd[1466]: time="2026-03-12T01:28:18.966929084Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 01:28:18.990084 containerd[1466]: time="2026-03-12T01:28:18.990004717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 12 01:28:18.992571 containerd[1466]: time="2026-03-12T01:28:18.992515725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:28:18.992695 containerd[1466]: time="2026-03-12T01:28:18.992567351Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 01:28:18.992695 containerd[1466]: time="2026-03-12T01:28:18.992627895Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 01:28:18.992907 containerd[1466]: time="2026-03-12T01:28:18.992863525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 01:28:18.992936 containerd[1466]: time="2026-03-12T01:28:18.992914439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993045 containerd[1466]: time="2026-03-12T01:28:18.993002254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993065 containerd[1466]: time="2026-03-12T01:28:18.993046707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993331 containerd[1466]: time="2026-03-12T01:28:18.993294509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993355 containerd[1466]: time="2026-03-12T01:28:18.993329294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993355 containerd[1466]: time="2026-03-12T01:28:18.993344022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993392 containerd[1466]: time="2026-03-12T01:28:18.993353860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993526 containerd[1466]: time="2026-03-12T01:28:18.993476820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:28:18.993935 containerd[1466]: time="2026-03-12T01:28:18.993891594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:28:18.994064 containerd[1466]: time="2026-03-12T01:28:18.994026286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:28:18.994064 containerd[1466]: time="2026-03-12T01:28:18.994056472Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 01:28:18.994191 containerd[1466]: time="2026-03-12T01:28:18.994155256Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 12 01:28:18.994326 containerd[1466]: time="2026-03-12T01:28:18.994280309Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:28:19.004650 containerd[1466]: time="2026-03-12T01:28:19.004558904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 01:28:19.004701 containerd[1466]: time="2026-03-12T01:28:19.004661597Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 01:28:19.004701 containerd[1466]: time="2026-03-12T01:28:19.004679124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 01:28:19.004701 containerd[1466]: time="2026-03-12T01:28:19.004693545Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 01:28:19.004765 containerd[1466]: time="2026-03-12T01:28:19.004706819Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 01:28:19.004880 containerd[1466]: time="2026-03-12T01:28:19.004837789Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 01:28:19.005079 containerd[1466]: time="2026-03-12T01:28:19.005044834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 01:28:19.005207 containerd[1466]: time="2026-03-12T01:28:19.005168631Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 01:28:19.005230 containerd[1466]: time="2026-03-12T01:28:19.005205591Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 01:28:19.005230 containerd[1466]: time="2026-03-12T01:28:19.005218578Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 12 01:28:19.005270 containerd[1466]: time="2026-03-12T01:28:19.005231759Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005270 containerd[1466]: time="2026-03-12T01:28:19.005243894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005270 containerd[1466]: time="2026-03-12T01:28:19.005255928Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005270 containerd[1466]: time="2026-03-12T01:28:19.005267777Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005393 containerd[1466]: time="2026-03-12T01:28:19.005280825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005393 containerd[1466]: time="2026-03-12T01:28:19.005292479Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005393 containerd[1466]: time="2026-03-12T01:28:19.005304040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005393 containerd[1466]: time="2026-03-12T01:28:19.005343020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 01:28:19.005393 containerd[1466]: time="2026-03-12T01:28:19.005374251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005393 containerd[1466]: time="2026-03-12T01:28:19.005386684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005397703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005409397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005419893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005431558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005441438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005452569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005463731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005475549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005485287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005502 containerd[1466]: time="2026-03-12T01:28:19.005495546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005506504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005527269Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005552535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005562980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005572737Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005657257Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005673594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005683025Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 01:28:19.005696 containerd[1466]: time="2026-03-12T01:28:19.005692722Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 01:28:19.005839 containerd[1466]: time="2026-03-12T01:28:19.005701874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.005839 containerd[1466]: time="2026-03-12T01:28:19.005712493Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 12 01:28:19.005839 containerd[1466]: time="2026-03-12T01:28:19.005726525Z" level=info msg="NRI interface is disabled by configuration." Mar 12 01:28:19.005839 containerd[1466]: time="2026-03-12T01:28:19.005736550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 12 01:28:19.006026 containerd[1466]: time="2026-03-12T01:28:19.005956468Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:28:19.006176 containerd[1466]: time="2026-03-12T01:28:19.006030143Z" level=info msg="Connect containerd service" Mar 12 01:28:19.006176 containerd[1466]: time="2026-03-12T01:28:19.006067145Z" level=info msg="using legacy CRI server" Mar 12 01:28:19.006176 containerd[1466]: time="2026-03-12T01:28:19.006074360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:28:19.006414 containerd[1466]: time="2026-03-12T01:28:19.006374012Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:28:19.008162 containerd[1466]: time="2026-03-12T01:28:19.008119809Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:28:19.008706 containerd[1466]: time="2026-03-12T01:28:19.008656744Z" level=info msg="Start subscribing containerd event" Mar 12 
01:28:19.008887 containerd[1466]: time="2026-03-12T01:28:19.008714172Z" level=info msg="Start recovering state" Mar 12 01:28:19.008975 containerd[1466]: time="2026-03-12T01:28:19.008941081Z" level=info msg="Start event monitor" Mar 12 01:28:19.009044 containerd[1466]: time="2026-03-12T01:28:19.008836975Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:28:19.009123 containerd[1466]: time="2026-03-12T01:28:19.009089753Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:28:19.011653 containerd[1466]: time="2026-03-12T01:28:19.009188131Z" level=info msg="Start snapshots syncer" Mar 12 01:28:19.011653 containerd[1466]: time="2026-03-12T01:28:19.009223134Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:28:19.011653 containerd[1466]: time="2026-03-12T01:28:19.009274629Z" level=info msg="Start streaming server" Mar 12 01:28:19.011653 containerd[1466]: time="2026-03-12T01:28:19.010593074Z" level=info msg="containerd successfully booted in 0.045036s" Mar 12 01:28:19.010387 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:28:19.173879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:28:19.186912 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:28:19.214394 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:28:19.227890 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:28:19.233392 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:44426.service - OpenSSH per-connection server daemon (10.0.0.1:44426). Mar 12 01:28:19.239297 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:28:19.239886 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:28:19.251263 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 12 01:28:19.263325 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:28:19.279203 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:28:19.284816 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:28:19.289486 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:28:19.327645 sshd[1520]: Accepted publickey for core from 10.0.0.1 port 44426 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:19.328498 sshd[1520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:19.337945 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:28:19.351979 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:28:19.358788 systemd-logind[1452]: New session 1 of user core. Mar 12 01:28:19.363794 tar[1464]: linux-amd64/README.md Mar 12 01:28:19.382352 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:28:19.387859 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:28:19.394523 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:28:19.405434 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:28:19.527223 systemd[1534]: Queued start job for default target default.target. Mar 12 01:28:19.537253 systemd[1534]: Created slice app.slice - User Application Slice. Mar 12 01:28:19.537342 systemd[1534]: Reached target paths.target - Paths. Mar 12 01:28:19.537367 systemd[1534]: Reached target timers.target - Timers. Mar 12 01:28:19.539181 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:28:19.553558 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:28:19.553845 systemd[1534]: Reached target sockets.target - Sockets. 
Mar 12 01:28:19.553892 systemd[1534]: Reached target basic.target - Basic System. Mar 12 01:28:19.553963 systemd[1534]: Reached target default.target - Main User Target. Mar 12 01:28:19.554017 systemd[1534]: Startup finished in 140ms. Mar 12 01:28:19.554089 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:28:19.558310 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:28:19.628201 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:44436.service - OpenSSH per-connection server daemon (10.0.0.1:44436). Mar 12 01:28:19.667126 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 44436 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:19.668888 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:19.673879 systemd-logind[1452]: New session 2 of user core. Mar 12 01:28:19.687058 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:28:19.750567 sshd[1545]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:19.765650 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:44436.service: Deactivated successfully. Mar 12 01:28:19.767267 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:28:19.768780 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:28:19.770003 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:44446.service - OpenSSH per-connection server daemon (10.0.0.1:44446). Mar 12 01:28:19.775203 systemd-logind[1452]: Removed session 2. Mar 12 01:28:19.806586 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 44446 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:19.808357 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:19.813467 systemd-logind[1452]: New session 3 of user core. Mar 12 01:28:19.830858 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 12 01:28:19.864994 systemd-networkd[1406]: eth0: Gained IPv6LL Mar 12 01:28:19.869465 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:28:19.873977 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:28:19.886918 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:28:19.891971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:28:19.896355 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:28:19.905879 sshd[1552]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:19.911727 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:44446.service: Deactivated successfully. Mar 12 01:28:19.914924 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:28:19.917839 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:28:19.923249 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:28:19.927863 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:28:19.928185 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:28:19.932851 systemd-logind[1452]: Removed session 3. Mar 12 01:28:19.934465 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:28:20.705452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:28:20.710893 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:28:20.716419 systemd[1]: Startup finished in 1.692s (kernel) + 11.033s (initrd) + 7.921s (userspace) = 20.646s. 
Mar 12 01:28:20.716470 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:28:21.150883 kubelet[1581]: E0312 01:28:21.150572 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:28:21.153756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:28:21.154058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:28:30.056912 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:49858.service - OpenSSH per-connection server daemon (10.0.0.1:49858). Mar 12 01:28:30.119453 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 49858 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:30.122920 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:30.140418 systemd-logind[1452]: New session 4 of user core. Mar 12 01:28:30.158157 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:28:30.230359 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:30.252386 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:49858.service: Deactivated successfully. Mar 12 01:28:30.255545 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:28:30.261952 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:28:30.280460 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:49874.service - OpenSSH per-connection server daemon (10.0.0.1:49874). Mar 12 01:28:30.283427 systemd-logind[1452]: Removed session 4. 
Mar 12 01:28:30.330269 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 49874 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:30.333497 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:30.347981 systemd-logind[1452]: New session 5 of user core. Mar 12 01:28:30.358400 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:28:30.419133 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:30.434662 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:49874.service: Deactivated successfully. Mar 12 01:28:30.437035 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:28:30.439462 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:28:30.449389 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:49884.service - OpenSSH per-connection server daemon (10.0.0.1:49884). Mar 12 01:28:30.451445 systemd-logind[1452]: Removed session 5. Mar 12 01:28:30.489635 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 49884 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:30.492088 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:30.499517 systemd-logind[1452]: New session 6 of user core. Mar 12 01:28:30.508895 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 01:28:30.572038 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:30.583743 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:49884.service: Deactivated successfully. Mar 12 01:28:30.586211 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:28:30.588224 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:28:30.600280 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:49892.service - OpenSSH per-connection server daemon (10.0.0.1:49892). Mar 12 01:28:30.601935 systemd-logind[1452]: Removed session 6. 
Mar 12 01:28:30.637160 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 49892 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:30.639863 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:30.645798 systemd-logind[1452]: New session 7 of user core. Mar 12 01:28:30.655954 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:28:30.740041 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:28:30.740571 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:28:30.771842 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 12 01:28:30.787902 sshd[1615]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:30.805069 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:49892.service: Deactivated successfully. Mar 12 01:28:30.809289 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:28:30.819926 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:28:30.832706 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:49894.service - OpenSSH per-connection server daemon (10.0.0.1:49894). Mar 12 01:28:30.834498 systemd-logind[1452]: Removed session 7. Mar 12 01:28:30.874739 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 49894 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:30.876941 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:30.887171 systemd-logind[1452]: New session 8 of user core. Mar 12 01:28:30.893859 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 12 01:28:30.962171 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:28:30.963292 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:28:30.970410 sudo[1627]: pam_unix(sudo:session): session closed for user root Mar 12 01:28:30.980325 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:28:30.980952 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:28:31.008100 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:28:31.011836 auditctl[1630]: No rules Mar 12 01:28:31.012350 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:28:31.012900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:28:31.064406 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:28:31.156823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 01:28:31.181854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:28:31.278173 augenrules[1651]: No rules Mar 12 01:28:31.286357 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:28:31.289448 sudo[1626]: pam_unix(sudo:session): session closed for user root Mar 12 01:28:31.302157 sshd[1623]: pam_unix(sshd:session): session closed for user core Mar 12 01:28:31.369715 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:49894.service: Deactivated successfully. Mar 12 01:28:31.373973 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 01:28:31.383459 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Mar 12 01:28:31.401213 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:49910.service - OpenSSH per-connection server daemon (10.0.0.1:49910). 
Mar 12 01:28:31.403244 systemd-logind[1452]: Removed session 8. Mar 12 01:28:32.017300 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 49910 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:28:32.204098 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:28:32.311224 systemd-logind[1452]: New session 9 of user core. Mar 12 01:28:32.354188 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 01:28:32.414977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:28:32.474096 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:28:32.474330 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:28:32.475024 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:28:32.906383 kubelet[1666]: E0312 01:28:32.905516 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:28:32.913788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:28:32.914910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:28:32.915795 systemd[1]: kubelet.service: Consumed 1.358s CPU time. Mar 12 01:28:35.078099 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 12 01:28:35.078232 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:28:39.066633 dockerd[1694]: time="2026-03-12T01:28:39.066073984Z" level=info msg="Starting up" Mar 12 01:28:40.318543 dockerd[1694]: time="2026-03-12T01:28:40.317550889Z" level=info msg="Loading containers: start." Mar 12 01:28:41.028942 kernel: Initializing XFRM netlink socket Mar 12 01:28:41.513198 systemd-networkd[1406]: docker0: Link UP Mar 12 01:28:41.559674 dockerd[1694]: time="2026-03-12T01:28:41.559434906Z" level=info msg="Loading containers: done." Mar 12 01:28:41.694975 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1606485635-merged.mount: Deactivated successfully. Mar 12 01:28:41.714363 dockerd[1694]: time="2026-03-12T01:28:41.712361312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:28:41.832944 dockerd[1694]: time="2026-03-12T01:28:41.820118244Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:28:41.896738 dockerd[1694]: time="2026-03-12T01:28:41.896170641Z" level=info msg="Daemon has completed initialization" Mar 12 01:28:42.070372 dockerd[1694]: time="2026-03-12T01:28:42.068558598Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:28:42.072090 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 01:28:43.027189 containerd[1466]: time="2026-03-12T01:28:43.027057234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 12 01:28:43.164388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 01:28:43.174948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 12 01:28:43.388883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:28:43.408185 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:28:43.503491 kubelet[1849]: E0312 01:28:43.503394 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:28:43.507960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:28:43.508369 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:28:43.750119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950243352.mount: Deactivated successfully. Mar 12 01:28:45.912831 containerd[1466]: time="2026-03-12T01:28:45.912668967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:45.915101 containerd[1466]: time="2026-03-12T01:28:45.915022603Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 12 01:28:45.916871 containerd[1466]: time="2026-03-12T01:28:45.916823120Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:45.921288 containerd[1466]: time="2026-03-12T01:28:45.921177984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:45.923101 containerd[1466]: time="2026-03-12T01:28:45.922988563Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.895882183s" Mar 12 01:28:45.923101 containerd[1466]: time="2026-03-12T01:28:45.923089334Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 12 01:28:45.925137 containerd[1466]: time="2026-03-12T01:28:45.925089943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 12 01:28:49.046273 containerd[1466]: time="2026-03-12T01:28:49.046054334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:49.048265 containerd[1466]: time="2026-03-12T01:28:49.048054387Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 12 01:28:49.049637 containerd[1466]: time="2026-03-12T01:28:49.049566424Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:49.053075 containerd[1466]: time="2026-03-12T01:28:49.053017020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:49.054448 containerd[1466]: time="2026-03-12T01:28:49.054362657Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id 
\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 3.129224122s" Mar 12 01:28:49.054448 containerd[1466]: time="2026-03-12T01:28:49.054425041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 12 01:28:49.056389 containerd[1466]: time="2026-03-12T01:28:49.056354822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 12 01:28:50.125231 containerd[1466]: time="2026-03-12T01:28:50.125142455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:50.126698 containerd[1466]: time="2026-03-12T01:28:50.126604501Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 12 01:28:50.128162 containerd[1466]: time="2026-03-12T01:28:50.128105870Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:50.132674 containerd[1466]: time="2026-03-12T01:28:50.132615171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:50.134796 containerd[1466]: time="2026-03-12T01:28:50.134690793Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.07829109s" Mar 12 01:28:50.134796 containerd[1466]: time="2026-03-12T01:28:50.134790948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 12 01:28:50.135710 containerd[1466]: time="2026-03-12T01:28:50.135676455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 12 01:28:51.155774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763377877.mount: Deactivated successfully. Mar 12 01:28:51.503334 containerd[1466]: time="2026-03-12T01:28:51.503124848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:51.504759 containerd[1466]: time="2026-03-12T01:28:51.504624350Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 12 01:28:51.506404 containerd[1466]: time="2026-03-12T01:28:51.506290360Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:51.509341 containerd[1466]: time="2026-03-12T01:28:51.509226802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:28:51.510822 containerd[1466]: time="2026-03-12T01:28:51.510671869Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.374812394s"
Mar 12 01:28:51.510822 containerd[1466]: time="2026-03-12T01:28:51.510720705Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 12 01:28:51.511863 containerd[1466]: time="2026-03-12T01:28:51.511779304Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 12 01:28:52.026797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812647790.mount: Deactivated successfully.
Mar 12 01:28:53.170310 containerd[1466]: time="2026-03-12T01:28:53.170211286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:53.172239 containerd[1466]: time="2026-03-12T01:28:53.172082839Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 12 01:28:53.173662 containerd[1466]: time="2026-03-12T01:28:53.173525148Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:53.178373 containerd[1466]: time="2026-03-12T01:28:53.178258257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:53.179808 containerd[1466]: time="2026-03-12T01:28:53.179712121Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.667875111s"
Mar 12 01:28:53.179808 containerd[1466]: time="2026-03-12T01:28:53.179741143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 12 01:28:53.180677 containerd[1466]: time="2026-03-12T01:28:53.180651667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 12 01:28:53.604803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 01:28:53.615078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:28:53.617103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103474873.mount: Deactivated successfully.
Mar 12 01:28:53.621254 containerd[1466]: time="2026-03-12T01:28:53.620046867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:53.622859 containerd[1466]: time="2026-03-12T01:28:53.622731976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 12 01:28:53.624285 containerd[1466]: time="2026-03-12T01:28:53.624150115Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:53.629055 containerd[1466]: time="2026-03-12T01:28:53.628969284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:53.630283 containerd[1466]: time="2026-03-12T01:28:53.630189372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 449.339419ms"
Mar 12 01:28:53.630283 containerd[1466]: time="2026-03-12T01:28:53.630255074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 12 01:28:53.631094 containerd[1466]: time="2026-03-12T01:28:53.631030832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 12 01:28:53.812308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:28:53.820850 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:28:53.886407 kubelet[2001]: E0312 01:28:53.886200 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:28:53.890404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:28:53.890751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:28:54.161089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208950527.mount: Deactivated successfully.
Mar 12 01:28:55.108813 containerd[1466]: time="2026-03-12T01:28:55.108693575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:55.110107 containerd[1466]: time="2026-03-12T01:28:55.110038313Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 12 01:28:55.111859 containerd[1466]: time="2026-03-12T01:28:55.111797993Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:55.115658 containerd[1466]: time="2026-03-12T01:28:55.115534934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:28:55.117518 containerd[1466]: time="2026-03-12T01:28:55.117409903Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.486307928s"
Mar 12 01:28:55.117518 containerd[1466]: time="2026-03-12T01:28:55.117469118Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 12 01:28:56.299369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:28:56.312112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:28:56.353899 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-9.scope)...
Mar 12 01:28:56.353939 systemd[1]: Reloading...
Mar 12 01:28:56.462755 zram_generator::config[2139]: No configuration found.
Mar 12 01:28:56.624143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:28:56.726849 systemd[1]: Reloading finished in 371 ms.
Mar 12 01:28:56.796774 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:28:56.801040 systemd[1]: kubelet.service: Deactivated successfully.
Mar 12 01:28:56.801315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:28:56.819392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:28:57.000976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:28:57.014519 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 01:28:57.072639 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 01:28:57.272555 kubelet[2191]: I0312 01:28:57.272373 2191 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 12 01:28:57.272555 kubelet[2191]: I0312 01:28:57.272435 2191 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 01:28:57.272555 kubelet[2191]: I0312 01:28:57.272454 2191 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 01:28:57.272555 kubelet[2191]: I0312 01:28:57.272459 2191 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 01:28:57.272894 kubelet[2191]: I0312 01:28:57.272739 2191 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 12 01:28:57.352646 kubelet[2191]: E0312 01:28:57.352530 2191 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 01:28:57.354081 kubelet[2191]: I0312 01:28:57.353998 2191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 01:28:57.360256 kubelet[2191]: E0312 01:28:57.360138 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 01:28:57.360256 kubelet[2191]: I0312 01:28:57.360223 2191 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 12 01:28:57.368521 kubelet[2191]: I0312 01:28:57.368442 2191 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 12 01:28:57.370392 kubelet[2191]: I0312 01:28:57.370279 2191 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 01:28:57.370612 kubelet[2191]: I0312 01:28:57.370329 2191 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 01:28:57.370831 kubelet[2191]: I0312 01:28:57.370567 2191 topology_manager.go:143] "Creating topology manager with none policy"
Mar 12 01:28:57.370831 kubelet[2191]: I0312 01:28:57.370633 2191 container_manager_linux.go:308] "Creating device plugin manager"
Mar 12 01:28:57.370831 kubelet[2191]: I0312 01:28:57.370765 2191 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 12 01:28:57.374017 kubelet[2191]: I0312 01:28:57.373939 2191 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 12 01:28:57.374333 kubelet[2191]: I0312 01:28:57.374265 2191 kubelet.go:482] "Attempting to sync node with API server"
Mar 12 01:28:57.374333 kubelet[2191]: I0312 01:28:57.374305 2191 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 01:28:57.374399 kubelet[2191]: I0312 01:28:57.374340 2191 kubelet.go:394] "Adding apiserver pod source"
Mar 12 01:28:57.374458 kubelet[2191]: I0312 01:28:57.374431 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 01:28:57.377254 kubelet[2191]: I0312 01:28:57.377184 2191 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 01:28:57.380229 kubelet[2191]: I0312 01:28:57.380135 2191 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 01:28:57.380229 kubelet[2191]: I0312 01:28:57.380208 2191 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 12 01:28:57.380346 kubelet[2191]: W0312 01:28:57.380309 2191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 12 01:28:57.387395 kubelet[2191]: I0312 01:28:57.387339 2191 server.go:1257] "Started kubelet"
Mar 12 01:28:57.387491 kubelet[2191]: I0312 01:28:57.387428 2191 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 01:28:57.387644 kubelet[2191]: I0312 01:28:57.387524 2191 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 12 01:28:57.390375 kubelet[2191]: I0312 01:28:57.387874 2191 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 01:28:57.390375 kubelet[2191]: I0312 01:28:57.388254 2191 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 01:28:57.390375 kubelet[2191]: I0312 01:28:57.389367 2191 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 01:28:57.396236 kubelet[2191]: I0312 01:28:57.396216 2191 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 12 01:28:57.398425 kubelet[2191]: I0312 01:28:57.398374 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 01:28:57.403198 kubelet[2191]: E0312 01:28:57.398248 2191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf3cc5ea6a937 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:28:57.387280695 +0000 UTC m=+0.366548365,LastTimestamp:2026-03-12 01:28:57.387280695 +0000 UTC m=+0.366548365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 12 01:28:57.403198 kubelet[2191]: E0312 01:28:57.402527 2191 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 12 01:28:57.403198 kubelet[2191]: I0312 01:28:57.402631 2191 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 12 01:28:57.403198 kubelet[2191]: I0312 01:28:57.402700 2191 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 12 01:28:57.403198 kubelet[2191]: I0312 01:28:57.402743 2191 reconciler.go:29] "Reconciler: start to sync state"
Mar 12 01:28:57.403198 kubelet[2191]: E0312 01:28:57.403101 2191 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms"
Mar 12 01:28:57.403501 kubelet[2191]: I0312 01:28:57.403466 2191 factory.go:223] Registration of the systemd container factory successfully
Mar 12 01:28:57.403921 kubelet[2191]: I0312 01:28:57.403851 2191 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 01:28:57.404872 kubelet[2191]: E0312 01:28:57.404783 2191 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 01:28:57.405410 kubelet[2191]: I0312 01:28:57.405378 2191 factory.go:223] Registration of the containerd container factory successfully
Mar 12 01:28:57.423476 kubelet[2191]: I0312 01:28:57.423454 2191 cpu_manager.go:225] "Starting" policy="none"
Mar 12 01:28:57.423986 kubelet[2191]: I0312 01:28:57.423716 2191 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 01:28:57.423986 kubelet[2191]: I0312 01:28:57.423737 2191 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 12 01:28:57.426444 kubelet[2191]: I0312 01:28:57.426398 2191 policy_none.go:50] "Start"
Mar 12 01:28:57.426531 kubelet[2191]: I0312 01:28:57.426454 2191 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 12 01:28:57.426531 kubelet[2191]: I0312 01:28:57.426477 2191 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 12 01:28:57.430181 kubelet[2191]: I0312 01:28:57.430129 2191 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 12 01:28:57.431136 kubelet[2191]: I0312 01:28:57.431094 2191 policy_none.go:44] "Start"
Mar 12 01:28:57.432424 kubelet[2191]: I0312 01:28:57.432389 2191 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 12 01:28:57.432424 kubelet[2191]: I0312 01:28:57.432423 2191 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 12 01:28:57.432527 kubelet[2191]: I0312 01:28:57.432444 2191 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 12 01:28:57.433655 kubelet[2191]: E0312 01:28:57.433566 2191 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 01:28:57.438701 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 12 01:28:57.460315 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 12 01:28:57.464027 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 12 01:28:57.482348 kubelet[2191]: E0312 01:28:57.482297 2191 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 01:28:57.482656 kubelet[2191]: I0312 01:28:57.482566 2191 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 12 01:28:57.482729 kubelet[2191]: I0312 01:28:57.482638 2191 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 01:28:57.482965 kubelet[2191]: I0312 01:28:57.482872 2191 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 12 01:28:57.484381 kubelet[2191]: E0312 01:28:57.484265 2191 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 01:28:57.484381 kubelet[2191]: E0312 01:28:57.484328 2191 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 12 01:28:57.546166 systemd[1]: Created slice kubepods-burstable-podcf1c781efa833f8957362ac4a3147dc9.slice - libcontainer container kubepods-burstable-podcf1c781efa833f8957362ac4a3147dc9.slice.
Mar 12 01:28:57.566563 kubelet[2191]: E0312 01:28:57.566461 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:28:57.570264 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 12 01:28:57.584730 kubelet[2191]: I0312 01:28:57.584625 2191 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 12 01:28:57.585358 kubelet[2191]: E0312 01:28:57.585162 2191 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Mar 12 01:28:57.585422 kubelet[2191]: E0312 01:28:57.585413 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:28:57.589087 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 12 01:28:57.591194 kubelet[2191]: E0312 01:28:57.591135 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:28:57.603793 kubelet[2191]: I0312 01:28:57.603739 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf1c781efa833f8957362ac4a3147dc9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf1c781efa833f8957362ac4a3147dc9\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:28:57.604133 kubelet[2191]: I0312 01:28:57.604055 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf1c781efa833f8957362ac4a3147dc9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf1c781efa833f8957362ac4a3147dc9\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:28:57.604133 kubelet[2191]: I0312 01:28:57.604116 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:28:57.604133 kubelet[2191]: E0312 01:28:57.603783 2191 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms"
Mar 12 01:28:57.604344 kubelet[2191]: I0312 01:28:57.604145 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:28:57.604344 kubelet[2191]: I0312 01:28:57.604223 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:28:57.604344 kubelet[2191]: I0312 01:28:57.604293 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:28:57.604344 kubelet[2191]: I0312 01:28:57.604318 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf1c781efa833f8957362ac4a3147dc9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf1c781efa833f8957362ac4a3147dc9\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:28:57.604344 kubelet[2191]: I0312 01:28:57.604344 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:28:57.604513 kubelet[2191]: I0312 01:28:57.604370 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 12 01:28:57.787350 kubelet[2191]: I0312 01:28:57.787210 2191 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 12 01:28:57.787685 kubelet[2191]: E0312 01:28:57.787572 2191 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Mar 12 01:28:57.871342 kubelet[2191]: E0312 01:28:57.871147 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:28:57.872768 containerd[1466]: time="2026-03-12T01:28:57.872398341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf1c781efa833f8957362ac4a3147dc9,Namespace:kube-system,Attempt:0,}"
Mar 12 01:28:57.888979 kubelet[2191]: E0312 01:28:57.888870 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:28:57.889835 containerd[1466]: time="2026-03-12T01:28:57.889776211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 12 01:28:57.895528 kubelet[2191]: E0312 01:28:57.895486 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:28:57.896361 containerd[1466]: time="2026-03-12T01:28:57.896216297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 12 01:28:58.004804 kubelet[2191]: E0312 01:28:58.004680 2191 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms"
Mar 12 01:28:58.193208 kubelet[2191]: I0312 01:28:58.191873 2191 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 12 01:28:58.193208 kubelet[2191]: E0312 01:28:58.192742 2191 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Mar 12 01:28:58.320904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966536241.mount: Deactivated successfully.
Mar 12 01:28:58.331049 containerd[1466]: time="2026-03-12T01:28:58.330908826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:28:58.335893 containerd[1466]: time="2026-03-12T01:28:58.335815018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 12 01:28:58.338152 containerd[1466]: time="2026-03-12T01:28:58.338017784Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:28:58.339633 containerd[1466]: time="2026-03-12T01:28:58.339524989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 12 01:28:58.340693 containerd[1466]: time="2026-03-12T01:28:58.340629873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:28:58.342687 containerd[1466]: time="2026-03-12T01:28:58.341980176Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:28:58.345951 containerd[1466]: time="2026-03-12T01:28:58.344226956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 12 01:28:58.348336 containerd[1466]: time="2026-03-12T01:28:58.348296806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:28:58.352749 containerd[1466]: time="2026-03-12T01:28:58.352676462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.34801ms"
Mar 12 01:28:58.356004 containerd[1466]: time="2026-03-12T01:28:58.355706384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.185356ms"
Mar 12 01:28:58.361290 containerd[1466]: time="2026-03-12T01:28:58.361195558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.296591ms"
Mar 12 01:28:58.508648 containerd[1466]: time="2026-03-12T01:28:58.508377036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:28:58.508648 containerd[1466]: time="2026-03-12T01:28:58.508452703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:28:58.508648 containerd[1466]: time="2026-03-12T01:28:58.508466452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:28:58.508648 containerd[1466]: time="2026-03-12T01:28:58.508539684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:28:58.509973 containerd[1466]: time="2026-03-12T01:28:58.509480408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:28:58.509973 containerd[1466]: time="2026-03-12T01:28:58.509547668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:28:58.509973 containerd[1466]: time="2026-03-12T01:28:58.509562509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:28:58.509973 containerd[1466]: time="2026-03-12T01:28:58.509703223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:28:58.510938 containerd[1466]: time="2026-03-12T01:28:58.510621726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:28:58.510938 containerd[1466]: time="2026-03-12T01:28:58.510681571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:28:58.510938 containerd[1466]: time="2026-03-12T01:28:58.510702134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:28:58.512299 containerd[1466]: time="2026-03-12T01:28:58.511715979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:28:58.557861 systemd[1]: Started cri-containerd-0d2456816d61846574bc8dc57f702b2d6e80004ea946cb4c1667917983ac12b8.scope - libcontainer container 0d2456816d61846574bc8dc57f702b2d6e80004ea946cb4c1667917983ac12b8.
Mar 12 01:28:58.560224 systemd[1]: Started cri-containerd-20497b4231e682aab8203c7139284f256fcd5113dabd76820e2aa6c9efaae260.scope - libcontainer container 20497b4231e682aab8203c7139284f256fcd5113dabd76820e2aa6c9efaae260.
Mar 12 01:28:58.562937 systemd[1]: Started cri-containerd-84e21c8ba8772d80bdb8ebe07c97cdcc8bfb9e38ef812249a547c3b66053c6bc.scope - libcontainer container 84e21c8ba8772d80bdb8ebe07c97cdcc8bfb9e38ef812249a547c3b66053c6bc.
Mar 12 01:28:58.621252 containerd[1466]: time="2026-03-12T01:28:58.621041706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"84e21c8ba8772d80bdb8ebe07c97cdcc8bfb9e38ef812249a547c3b66053c6bc\""
Mar 12 01:28:58.626488 kubelet[2191]: E0312 01:28:58.625879 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:28:58.628039 containerd[1466]: time="2026-03-12T01:28:58.628012728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf1c781efa833f8957362ac4a3147dc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"20497b4231e682aab8203c7139284f256fcd5113dabd76820e2aa6c9efaae260\""
Mar 12 01:28:58.630264 kubelet[2191]: E0312 01:28:58.630203 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:28:58.631450 containerd[1466]: time="2026-03-12T01:28:58.630796225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2456816d61846574bc8dc57f702b2d6e80004ea946cb4c1667917983ac12b8\""
Mar 12 01:28:58.632100 kubelet[2191]: E0312 01:28:58.632061 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:28:58.633682 containerd[1466]: time="2026-03-12T01:28:58.633655679Z" level=info msg="CreateContainer within sandbox \"84e21c8ba8772d80bdb8ebe07c97cdcc8bfb9e38ef812249a547c3b66053c6bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 12 01:28:58.638520 containerd[1466]: time="2026-03-12T01:28:58.638492544Z" level=info msg="CreateContainer within sandbox \"0d2456816d61846574bc8dc57f702b2d6e80004ea946cb4c1667917983ac12b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 12 01:28:58.641197 containerd[1466]: time="2026-03-12T01:28:58.641138414Z" level=info msg="CreateContainer within sandbox \"20497b4231e682aab8203c7139284f256fcd5113dabd76820e2aa6c9efaae260\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 12 01:28:58.655690 containerd[1466]: time="2026-03-12T01:28:58.655620592Z" level=info msg="CreateContainer within sandbox \"84e21c8ba8772d80bdb8ebe07c97cdcc8bfb9e38ef812249a547c3b66053c6bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0683a41f9373173a9f35f6bd6735a89682f82c958c4b199f6279985531d33e84\""
Mar 12 01:28:58.656402 containerd[1466]: time="2026-03-12T01:28:58.656350144Z" level=info msg="StartContainer for \"0683a41f9373173a9f35f6bd6735a89682f82c958c4b199f6279985531d33e84\""
Mar 12 01:28:58.665497 containerd[1466]: time="2026-03-12T01:28:58.665382728Z" level=info msg="CreateContainer within sandbox \"0d2456816d61846574bc8dc57f702b2d6e80004ea946cb4c1667917983ac12b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8313250a4ee03f2c21ee77b0b7765bb733740cc4a4a981250ad0a2a6e6a2c8fd\""
Mar 12 01:28:58.667208 containerd[1466]: time="2026-03-12T01:28:58.667110593Z" level=info msg="StartContainer for \"8313250a4ee03f2c21ee77b0b7765bb733740cc4a4a981250ad0a2a6e6a2c8fd\""
Mar 12 01:28:58.669159 containerd[1466]: time="2026-03-12T01:28:58.669137572Z" level=info msg="CreateContainer within sandbox \"20497b4231e682aab8203c7139284f256fcd5113dabd76820e2aa6c9efaae260\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2cd0aafb5f51e36cf415ceb90c93b819c6fa03d56c9a0ae96e8d0f80291463a0\""
Mar 12 01:28:58.669835 containerd[1466]: time="2026-03-12T01:28:58.669809934Z" level=info msg="StartContainer for \"2cd0aafb5f51e36cf415ceb90c93b819c6fa03d56c9a0ae96e8d0f80291463a0\""
Mar 12 01:28:58.692939 systemd[1]: Started cri-containerd-0683a41f9373173a9f35f6bd6735a89682f82c958c4b199f6279985531d33e84.scope - libcontainer container 0683a41f9373173a9f35f6bd6735a89682f82c958c4b199f6279985531d33e84.
Mar 12 01:28:58.706020 systemd[1]: Started cri-containerd-2cd0aafb5f51e36cf415ceb90c93b819c6fa03d56c9a0ae96e8d0f80291463a0.scope - libcontainer container 2cd0aafb5f51e36cf415ceb90c93b819c6fa03d56c9a0ae96e8d0f80291463a0.
Mar 12 01:28:58.710835 systemd[1]: Started cri-containerd-8313250a4ee03f2c21ee77b0b7765bb733740cc4a4a981250ad0a2a6e6a2c8fd.scope - libcontainer container 8313250a4ee03f2c21ee77b0b7765bb733740cc4a4a981250ad0a2a6e6a2c8fd.
Mar 12 01:28:58.778654 containerd[1466]: time="2026-03-12T01:28:58.778431044Z" level=info msg="StartContainer for \"0683a41f9373173a9f35f6bd6735a89682f82c958c4b199f6279985531d33e84\" returns successfully" Mar 12 01:28:58.791639 containerd[1466]: time="2026-03-12T01:28:58.788948294Z" level=info msg="StartContainer for \"2cd0aafb5f51e36cf415ceb90c93b819c6fa03d56c9a0ae96e8d0f80291463a0\" returns successfully" Mar 12 01:28:58.801685 containerd[1466]: time="2026-03-12T01:28:58.801623207Z" level=info msg="StartContainer for \"8313250a4ee03f2c21ee77b0b7765bb733740cc4a4a981250ad0a2a6e6a2c8fd\" returns successfully" Mar 12 01:28:58.810641 kubelet[2191]: E0312 01:28:58.807831 2191 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Mar 12 01:28:58.996540 kubelet[2191]: I0312 01:28:58.996505 2191 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:28:59.454294 kubelet[2191]: E0312 01:28:59.454182 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:28:59.455002 kubelet[2191]: E0312 01:28:59.454425 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:28:59.461436 kubelet[2191]: E0312 01:28:59.458952 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:28:59.461436 kubelet[2191]: E0312 01:28:59.459081 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:28:59.461436 
kubelet[2191]: E0312 01:28:59.461098 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:28:59.461436 kubelet[2191]: E0312 01:28:59.461243 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:00.465302 kubelet[2191]: E0312 01:29:00.465257 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:29:00.466030 kubelet[2191]: E0312 01:29:00.465389 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:00.466295 kubelet[2191]: E0312 01:29:00.466272 2191 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:29:00.466399 kubelet[2191]: E0312 01:29:00.466356 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:00.507475 kubelet[2191]: E0312 01:29:00.507396 2191 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 01:29:00.601657 kubelet[2191]: I0312 01:29:00.600090 2191 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 12 01:29:00.603403 kubelet[2191]: I0312 01:29:00.603342 2191 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:29:00.631217 kubelet[2191]: E0312 01:29:00.631177 2191 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:29:00.631662 kubelet[2191]: I0312 01:29:00.631460 2191 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:00.635970 kubelet[2191]: E0312 01:29:00.635922 2191 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:00.635970 kubelet[2191]: I0312 01:29:00.635955 2191 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:00.641343 kubelet[2191]: E0312 01:29:00.641287 2191 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:01.377138 kubelet[2191]: I0312 01:29:01.377018 2191 apiserver.go:52] "Watching apiserver" Mar 12 01:29:01.403503 kubelet[2191]: I0312 01:29:01.403431 2191 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:29:01.466451 kubelet[2191]: I0312 01:29:01.466362 2191 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:29:01.467438 kubelet[2191]: I0312 01:29:01.466980 2191 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:01.469983 kubelet[2191]: E0312 01:29:01.469407 2191 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:29:01.469983 kubelet[2191]: E0312 01:29:01.469642 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:01.469983 kubelet[2191]: E0312 01:29:01.469810 2191 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:01.469983 kubelet[2191]: E0312 01:29:01.469929 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:01.608385 kubelet[2191]: I0312 01:29:01.608258 2191 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:01.619501 kubelet[2191]: E0312 01:29:01.619318 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:02.468962 kubelet[2191]: E0312 01:29:02.468858 2191 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:02.973284 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-9.scope)... Mar 12 01:29:02.973323 systemd[1]: Reloading... Mar 12 01:29:03.080671 zram_generator::config[2527]: No configuration found. Mar 12 01:29:03.235052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:29:03.342195 systemd[1]: Reloading finished in 368 ms. Mar 12 01:29:03.403799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:29:03.417726 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 12 01:29:03.418211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:29:03.418320 systemd[1]: kubelet.service: Consumed 1.102s CPU time, 128.4M memory peak, 0B memory swap peak. Mar 12 01:29:03.431991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:29:03.624297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:29:03.633011 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:29:03.707256 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:29:03.718911 kubelet[2569]: I0312 01:29:03.718798 2569 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 12 01:29:03.718911 kubelet[2569]: I0312 01:29:03.718879 2569 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:29:03.718911 kubelet[2569]: I0312 01:29:03.718906 2569 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:29:03.718911 kubelet[2569]: I0312 01:29:03.718916 2569 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 01:29:03.719349 kubelet[2569]: I0312 01:29:03.719281 2569 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 01:29:03.721082 kubelet[2569]: I0312 01:29:03.720997 2569 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:29:03.724028 kubelet[2569]: I0312 01:29:03.723930 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:29:03.728519 kubelet[2569]: E0312 01:29:03.728465 2569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:29:03.728759 kubelet[2569]: I0312 01:29:03.728625 2569 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:29:03.738432 kubelet[2569]: I0312 01:29:03.738365 2569 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 01:29:03.739130 kubelet[2569]: I0312 01:29:03.739044 2569 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:29:03.739382 kubelet[2569]: I0312 01:29:03.739106 2569 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:29:03.739382 kubelet[2569]: I0312 01:29:03.739365 2569 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 01:29:03.739382 
kubelet[2569]: I0312 01:29:03.739377 2569 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 01:29:03.739530 kubelet[2569]: I0312 01:29:03.739409 2569 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:29:03.739823 kubelet[2569]: I0312 01:29:03.739758 2569 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 01:29:03.740143 kubelet[2569]: I0312 01:29:03.740068 2569 kubelet.go:482] "Attempting to sync node with API server" Mar 12 01:29:03.740477 kubelet[2569]: I0312 01:29:03.740150 2569 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:29:03.740533 kubelet[2569]: I0312 01:29:03.740518 2569 kubelet.go:394] "Adding apiserver pod source" Mar 12 01:29:03.740568 kubelet[2569]: I0312 01:29:03.740537 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:29:03.747407 kubelet[2569]: I0312 01:29:03.747241 2569 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:29:03.756330 kubelet[2569]: I0312 01:29:03.755129 2569 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:29:03.756330 kubelet[2569]: I0312 01:29:03.756290 2569 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:29:03.764251 kubelet[2569]: I0312 01:29:03.764063 2569 server.go:1257] "Started kubelet" Mar 12 01:29:03.764723 kubelet[2569]: I0312 01:29:03.764571 2569 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:29:03.764774 kubelet[2569]: I0312 01:29:03.764730 2569 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:29:03.765657 kubelet[2569]: I0312 01:29:03.765554 2569 
server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:29:03.765883 kubelet[2569]: I0312 01:29:03.765816 2569 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:29:03.766837 kubelet[2569]: I0312 01:29:03.766779 2569 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 01:29:03.767332 kubelet[2569]: I0312 01:29:03.767310 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:29:03.768955 kubelet[2569]: I0312 01:29:03.768701 2569 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:29:03.770248 kubelet[2569]: I0312 01:29:03.770228 2569 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 01:29:03.774549 kubelet[2569]: I0312 01:29:03.774410 2569 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:29:03.775219 kubelet[2569]: I0312 01:29:03.775013 2569 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:29:03.776119 kubelet[2569]: I0312 01:29:03.776027 2569 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:29:03.776297 kubelet[2569]: I0312 01:29:03.776243 2569 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:29:03.782002 kubelet[2569]: I0312 01:29:03.781938 2569 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:29:03.785427 kubelet[2569]: E0312 01:29:03.785355 2569 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:29:03.810449 kubelet[2569]: I0312 01:29:03.810362 2569 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 01:29:03.812819 kubelet[2569]: I0312 01:29:03.812795 2569 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:29:03.812955 kubelet[2569]: I0312 01:29:03.812940 2569 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 01:29:03.813076 kubelet[2569]: I0312 01:29:03.813058 2569 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 01:29:03.813568 kubelet[2569]: E0312 01:29:03.813267 2569 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:29:03.840119 kubelet[2569]: I0312 01:29:03.840092 2569 cpu_manager.go:225] "Starting" policy="none" Mar 12 01:29:03.840374 kubelet[2569]: I0312 01:29:03.840275 2569 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 01:29:03.840454 kubelet[2569]: I0312 01:29:03.840398 2569 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 01:29:03.840717 kubelet[2569]: I0312 01:29:03.840666 2569 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 12 01:29:03.840717 kubelet[2569]: I0312 01:29:03.840691 2569 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 12 01:29:03.840717 kubelet[2569]: I0312 01:29:03.840708 2569 policy_none.go:50] "Start" Mar 12 01:29:03.840717 kubelet[2569]: I0312 01:29:03.840716 2569 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:29:03.840884 kubelet[2569]: I0312 01:29:03.840727 2569 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:29:03.840884 kubelet[2569]: I0312 01:29:03.840851 2569 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 01:29:03.840884 kubelet[2569]: I0312 01:29:03.840862 2569 
policy_none.go:44] "Start" Mar 12 01:29:03.847333 kubelet[2569]: E0312 01:29:03.847268 2569 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:29:03.847548 kubelet[2569]: I0312 01:29:03.847480 2569 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 01:29:03.847690 kubelet[2569]: I0312 01:29:03.847537 2569 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:29:03.848473 kubelet[2569]: I0312 01:29:03.848226 2569 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 01:29:03.850281 kubelet[2569]: E0312 01:29:03.850256 2569 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:29:03.915570 kubelet[2569]: I0312 01:29:03.915451 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:03.917236 kubelet[2569]: I0312 01:29:03.915566 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.917442 kubelet[2569]: I0312 01:29:03.916115 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:29:03.928363 kubelet[2569]: E0312 01:29:03.928277 2569 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.966501 kubelet[2569]: I0312 01:29:03.966124 2569 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:29:03.967166 sudo[2611]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 12 01:29:03.967812 sudo[2611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 12 01:29:03.977252 
kubelet[2569]: I0312 01:29:03.977177 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf1c781efa833f8957362ac4a3147dc9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf1c781efa833f8957362ac4a3147dc9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:03.977860 kubelet[2569]: I0312 01:29:03.977431 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf1c781efa833f8957362ac4a3147dc9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf1c781efa833f8957362ac4a3147dc9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:03.977860 kubelet[2569]: I0312 01:29:03.977455 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.977860 kubelet[2569]: I0312 01:29:03.977491 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.977860 kubelet[2569]: I0312 01:29:03.977515 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.977860 
kubelet[2569]: I0312 01:29:03.977560 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:29:03.978045 kubelet[2569]: I0312 01:29:03.977641 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.978045 kubelet[2569]: I0312 01:29:03.977688 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:29:03.978045 kubelet[2569]: I0312 01:29:03.977715 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf1c781efa833f8957362ac4a3147dc9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf1c781efa833f8957362ac4a3147dc9\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:03.985320 kubelet[2569]: I0312 01:29:03.985024 2569 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 12 01:29:03.985664 kubelet[2569]: I0312 01:29:03.985500 2569 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 12 01:29:04.071329 update_engine[1455]: I20260312 01:29:04.071009 1455 update_attempter.cc:509] Updating boot flags... 
Mar 12 01:29:04.111707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2621) Mar 12 01:29:04.199781 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2621) Mar 12 01:29:04.228156 kubelet[2569]: E0312 01:29:04.228125 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:04.230364 kubelet[2569]: E0312 01:29:04.228391 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:04.232794 kubelet[2569]: E0312 01:29:04.228491 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:04.739951 sudo[2611]: pam_unix(sudo:session): session closed for user root Mar 12 01:29:04.741978 kubelet[2569]: I0312 01:29:04.741923 2569 apiserver.go:52] "Watching apiserver" Mar 12 01:29:04.775785 kubelet[2569]: I0312 01:29:04.775617 2569 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:29:04.829519 kubelet[2569]: I0312 01:29:04.829378 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:04.829890 kubelet[2569]: E0312 01:29:04.829736 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:04.830176 kubelet[2569]: E0312 01:29:04.830098 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:04.840889 kubelet[2569]: E0312 
01:29:04.840798 2569 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:29:04.841159 kubelet[2569]: E0312 01:29:04.841045 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:04.870742 kubelet[2569]: I0312 01:29:04.870569 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8705529269999999 podStartE2EDuration="1.870552927s" podCreationTimestamp="2026-03-12 01:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:29:04.860334394 +0000 UTC m=+1.220317609" watchObservedRunningTime="2026-03-12 01:29:04.870552927 +0000 UTC m=+1.230536122" Mar 12 01:29:04.870936 kubelet[2569]: I0312 01:29:04.870771 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.870765128 podStartE2EDuration="3.870765128s" podCreationTimestamp="2026-03-12 01:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:29:04.870754618 +0000 UTC m=+1.230737823" watchObservedRunningTime="2026-03-12 01:29:04.870765128 +0000 UTC m=+1.230748323" Mar 12 01:29:04.993473 kubelet[2569]: I0312 01:29:04.992971 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9929559559999999 podStartE2EDuration="1.992955956s" podCreationTimestamp="2026-03-12 01:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:29:04.88088313 
+0000 UTC m=+1.240866335" watchObservedRunningTime="2026-03-12 01:29:04.992955956 +0000 UTC m=+1.352939152" Mar 12 01:29:05.833328 kubelet[2569]: E0312 01:29:05.833237 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:05.834092 kubelet[2569]: E0312 01:29:05.833916 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:06.289307 sudo[1668]: pam_unix(sudo:session): session closed for user root Mar 12 01:29:06.291551 sshd[1659]: pam_unix(sshd:session): session closed for user core Mar 12 01:29:06.295367 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:49910.service: Deactivated successfully. Mar 12 01:29:06.297843 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 01:29:06.298148 systemd[1]: session-9.scope: Consumed 6.476s CPU time, 162.0M memory peak, 0B memory swap peak. Mar 12 01:29:06.300026 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Mar 12 01:29:06.301399 systemd-logind[1452]: Removed session 9. 
Mar 12 01:29:06.835180 kubelet[2569]: E0312 01:29:06.835118 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:07.837141 kubelet[2569]: E0312 01:29:07.837043 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:08.864321 kubelet[2569]: I0312 01:29:08.864179 2569 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:29:08.864982 containerd[1466]: time="2026-03-12T01:29:08.864724885Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:29:08.865343 kubelet[2569]: I0312 01:29:08.865039 2569 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:29:08.965710 kubelet[2569]: E0312 01:29:08.965661 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:09.804851 systemd[1]: Created slice kubepods-besteffort-podd5dd34d5_54bc_4667_99ae_be6a0d402369.slice - libcontainer container kubepods-besteffort-podd5dd34d5_54bc_4667_99ae_be6a0d402369.slice. 
Mar 12 01:29:09.818694 kubelet[2569]: I0312 01:29:09.818231 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-net\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.818694 kubelet[2569]: I0312 01:29:09.818281 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5wn\" (UniqueName: \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-kube-api-access-mf5wn\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.818694 kubelet[2569]: I0312 01:29:09.818315 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5dd34d5-54bc-4667-99ae-be6a0d402369-kube-proxy\") pod \"kube-proxy-45nsn\" (UID: \"d5dd34d5-54bc-4667-99ae-be6a0d402369\") " pod="kube-system/kube-proxy-45nsn" Mar 12 01:29:09.818694 kubelet[2569]: I0312 01:29:09.818345 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5dd34d5-54bc-4667-99ae-be6a0d402369-xtables-lock\") pod \"kube-proxy-45nsn\" (UID: \"d5dd34d5-54bc-4667-99ae-be6a0d402369\") " pod="kube-system/kube-proxy-45nsn" Mar 12 01:29:09.818694 kubelet[2569]: I0312 01:29:09.818374 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d011ef8-9c31-4a72-be18-b0f8003c6132-clustermesh-secrets\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.819000 kubelet[2569]: I0312 01:29:09.818402 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-config-path\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.819000 kubelet[2569]: I0312 01:29:09.818429 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-kernel\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.819000 kubelet[2569]: I0312 01:29:09.818460 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-hubble-tls\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.819000 kubelet[2569]: I0312 01:29:09.818488 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5dd34d5-54bc-4667-99ae-be6a0d402369-lib-modules\") pod \"kube-proxy-45nsn\" (UID: \"d5dd34d5-54bc-4667-99ae-be6a0d402369\") " pod="kube-system/kube-proxy-45nsn" Mar 12 01:29:09.819000 kubelet[2569]: I0312 01:29:09.818512 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-cgroup\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.819202 kubelet[2569]: I0312 01:29:09.818533 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2lgp\" (UniqueName: 
\"kubernetes.io/projected/d5dd34d5-54bc-4667-99ae-be6a0d402369-kube-api-access-p2lgp\") pod \"kube-proxy-45nsn\" (UID: \"d5dd34d5-54bc-4667-99ae-be6a0d402369\") " pod="kube-system/kube-proxy-45nsn" Mar 12 01:29:09.819202 kubelet[2569]: I0312 01:29:09.818559 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-bpf-maps\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.819742 kubelet[2569]: I0312 01:29:09.819688 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cni-path\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.820304 kubelet[2569]: I0312 01:29:09.820272 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-lib-modules\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.820736 kubelet[2569]: I0312 01:29:09.820328 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-xtables-lock\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.820736 kubelet[2569]: I0312 01:29:09.820384 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-run\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " 
pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.820736 kubelet[2569]: I0312 01:29:09.820423 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-hostproc\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.820736 kubelet[2569]: I0312 01:29:09.820447 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-etc-cni-netd\") pod \"cilium-6m4jd\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") " pod="kube-system/cilium-6m4jd" Mar 12 01:29:09.827100 systemd[1]: Created slice kubepods-burstable-pod4d011ef8_9c31_4a72_be18_b0f8003c6132.slice - libcontainer container kubepods-burstable-pod4d011ef8_9c31_4a72_be18_b0f8003c6132.slice. Mar 12 01:29:10.093378 systemd[1]: Created slice kubepods-besteffort-podcccff58b_6329_4f0d_a95b_6c7e986deb28.slice - libcontainer container kubepods-besteffort-podcccff58b_6329_4f0d_a95b_6c7e986deb28.slice. 
Mar 12 01:29:10.122830 kubelet[2569]: E0312 01:29:10.122763 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:10.123910 kubelet[2569]: I0312 01:29:10.123272 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cccff58b-6329-4f0d-a95b-6c7e986deb28-cilium-config-path\") pod \"cilium-operator-78cf5644cb-p9cp6\" (UID: \"cccff58b-6329-4f0d-a95b-6c7e986deb28\") " pod="kube-system/cilium-operator-78cf5644cb-p9cp6" Mar 12 01:29:10.123910 kubelet[2569]: I0312 01:29:10.123325 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srj8v\" (UniqueName: \"kubernetes.io/projected/cccff58b-6329-4f0d-a95b-6c7e986deb28-kube-api-access-srj8v\") pod \"cilium-operator-78cf5644cb-p9cp6\" (UID: \"cccff58b-6329-4f0d-a95b-6c7e986deb28\") " pod="kube-system/cilium-operator-78cf5644cb-p9cp6" Mar 12 01:29:10.125631 containerd[1466]: time="2026-03-12T01:29:10.124456767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45nsn,Uid:d5dd34d5-54bc-4667-99ae-be6a0d402369,Namespace:kube-system,Attempt:0,}" Mar 12 01:29:10.141402 kubelet[2569]: E0312 01:29:10.140320 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:10.141972 containerd[1466]: time="2026-03-12T01:29:10.141261161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6m4jd,Uid:4d011ef8-9c31-4a72-be18-b0f8003c6132,Namespace:kube-system,Attempt:0,}" Mar 12 01:29:10.200278 containerd[1466]: time="2026-03-12T01:29:10.200064465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:29:10.200278 containerd[1466]: time="2026-03-12T01:29:10.200246858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:29:10.200458 containerd[1466]: time="2026-03-12T01:29:10.200290484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:29:10.200458 containerd[1466]: time="2026-03-12T01:29:10.200428909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:29:10.205682 containerd[1466]: time="2026-03-12T01:29:10.204355189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:29:10.205682 containerd[1466]: time="2026-03-12T01:29:10.204783610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:29:10.205682 containerd[1466]: time="2026-03-12T01:29:10.204827548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:29:10.205682 containerd[1466]: time="2026-03-12T01:29:10.204944891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:29:10.234015 systemd[1]: Started cri-containerd-5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57.scope - libcontainer container 5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57. Mar 12 01:29:10.242222 systemd[1]: Started cri-containerd-3ba275b091023579d3b1f80a74d65c11683af839b154605320695e612bda1b04.scope - libcontainer container 3ba275b091023579d3b1f80a74d65c11683af839b154605320695e612bda1b04. 
Mar 12 01:29:10.285522 containerd[1466]: time="2026-03-12T01:29:10.285440317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6m4jd,Uid:4d011ef8-9c31-4a72-be18-b0f8003c6132,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\"" Mar 12 01:29:10.288180 kubelet[2569]: E0312 01:29:10.288074 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:10.291561 containerd[1466]: time="2026-03-12T01:29:10.291276259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 12 01:29:10.308453 containerd[1466]: time="2026-03-12T01:29:10.308319555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45nsn,Uid:d5dd34d5-54bc-4667-99ae-be6a0d402369,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ba275b091023579d3b1f80a74d65c11683af839b154605320695e612bda1b04\"" Mar 12 01:29:10.310058 kubelet[2569]: E0312 01:29:10.309939 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:10.318844 containerd[1466]: time="2026-03-12T01:29:10.318791546Z" level=info msg="CreateContainer within sandbox \"3ba275b091023579d3b1f80a74d65c11683af839b154605320695e612bda1b04\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:29:10.350249 containerd[1466]: time="2026-03-12T01:29:10.348904033Z" level=info msg="CreateContainer within sandbox \"3ba275b091023579d3b1f80a74d65c11683af839b154605320695e612bda1b04\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"946f40de196824145997995bf4c35906eb579e7871299e9eb7bd89a70e5551bd\"" Mar 12 01:29:10.350354 containerd[1466]: time="2026-03-12T01:29:10.350318658Z" 
level=info msg="StartContainer for \"946f40de196824145997995bf4c35906eb579e7871299e9eb7bd89a70e5551bd\"" Mar 12 01:29:10.400047 systemd[1]: Started cri-containerd-946f40de196824145997995bf4c35906eb579e7871299e9eb7bd89a70e5551bd.scope - libcontainer container 946f40de196824145997995bf4c35906eb579e7871299e9eb7bd89a70e5551bd. Mar 12 01:29:10.402555 kubelet[2569]: E0312 01:29:10.402077 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:10.404693 containerd[1466]: time="2026-03-12T01:29:10.403870588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-p9cp6,Uid:cccff58b-6329-4f0d-a95b-6c7e986deb28,Namespace:kube-system,Attempt:0,}" Mar 12 01:29:10.443191 containerd[1466]: time="2026-03-12T01:29:10.441676508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:29:10.443191 containerd[1466]: time="2026-03-12T01:29:10.441741387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:29:10.443191 containerd[1466]: time="2026-03-12T01:29:10.441762469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:29:10.443191 containerd[1466]: time="2026-03-12T01:29:10.441904532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:29:10.467932 containerd[1466]: time="2026-03-12T01:29:10.467812127Z" level=info msg="StartContainer for \"946f40de196824145997995bf4c35906eb579e7871299e9eb7bd89a70e5551bd\" returns successfully" Mar 12 01:29:10.486102 systemd[1]: Started cri-containerd-9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6.scope - libcontainer container 9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6. Mar 12 01:29:10.551948 containerd[1466]: time="2026-03-12T01:29:10.551853718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-p9cp6,Uid:cccff58b-6329-4f0d-a95b-6c7e986deb28,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6\"" Mar 12 01:29:10.554684 kubelet[2569]: E0312 01:29:10.554651 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:10.847242 kubelet[2569]: E0312 01:29:10.847017 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:13.832129 kubelet[2569]: I0312 01:29:13.831722 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-45nsn" podStartSLOduration=4.831705431 podStartE2EDuration="4.831705431s" podCreationTimestamp="2026-03-12 01:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:29:10.869335042 +0000 UTC m=+7.229318236" watchObservedRunningTime="2026-03-12 01:29:13.831705431 +0000 UTC m=+10.191688626" Mar 12 01:29:13.959892 kubelet[2569]: E0312 01:29:13.959715 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:17.783057 kubelet[2569]: E0312 01:29:17.782741 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:19.051306 kubelet[2569]: E0312 01:29:19.051029 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:23.997837 kubelet[2569]: E0312 01:29:23.995888 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:33.336911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952326405.mount: Deactivated successfully. Mar 12 01:29:37.389382 containerd[1466]: time="2026-03-12T01:29:37.389065344Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:29:37.391120 containerd[1466]: time="2026-03-12T01:29:37.390508387Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 12 01:29:37.393756 containerd[1466]: time="2026-03-12T01:29:37.393132045Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:29:37.398626 containerd[1466]: time="2026-03-12T01:29:37.396988529Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 27.105670277s" Mar 12 01:29:37.398626 containerd[1466]: time="2026-03-12T01:29:37.397026722Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 12 01:29:37.404294 containerd[1466]: time="2026-03-12T01:29:37.404050460Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 12 01:29:37.422246 containerd[1466]: time="2026-03-12T01:29:37.421972652Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 12 01:29:37.475819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3403232349.mount: Deactivated successfully. Mar 12 01:29:37.483658 containerd[1466]: time="2026-03-12T01:29:37.483517872Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\"" Mar 12 01:29:37.484753 containerd[1466]: time="2026-03-12T01:29:37.484713052Z" level=info msg="StartContainer for \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\"" Mar 12 01:29:37.589967 systemd[1]: Started cri-containerd-56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1.scope - libcontainer container 56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1. 
Mar 12 01:29:37.695005 containerd[1466]: time="2026-03-12T01:29:37.693200166Z" level=info msg="StartContainer for \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\" returns successfully" Mar 12 01:29:37.743683 systemd[1]: cri-containerd-56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1.scope: Deactivated successfully. Mar 12 01:29:37.823411 containerd[1466]: time="2026-03-12T01:29:37.823009156Z" level=info msg="shim disconnected" id=56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1 namespace=k8s.io Mar 12 01:29:37.823411 containerd[1466]: time="2026-03-12T01:29:37.823117113Z" level=warning msg="cleaning up after shim disconnected" id=56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1 namespace=k8s.io Mar 12 01:29:37.823411 containerd[1466]: time="2026-03-12T01:29:37.823130869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:29:38.430990 kubelet[2569]: E0312 01:29:38.430911 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:38.442106 containerd[1466]: time="2026-03-12T01:29:38.442047055Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 12 01:29:38.468786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1-rootfs.mount: Deactivated successfully. 
Mar 12 01:29:38.533173 containerd[1466]: time="2026-03-12T01:29:38.533036955Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\"" Mar 12 01:29:38.534143 containerd[1466]: time="2026-03-12T01:29:38.534030752Z" level=info msg="StartContainer for \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\"" Mar 12 01:29:38.600853 systemd[1]: Started cri-containerd-e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f.scope - libcontainer container e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f. Mar 12 01:29:38.691251 containerd[1466]: time="2026-03-12T01:29:38.690200386Z" level=info msg="StartContainer for \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\" returns successfully" Mar 12 01:29:38.716273 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:29:38.716644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:29:38.716723 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:29:38.725424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:29:38.726119 systemd[1]: cri-containerd-e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f.scope: Deactivated successfully. Mar 12 01:29:38.801366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 12 01:29:38.846419 containerd[1466]: time="2026-03-12T01:29:38.846188996Z" level=info msg="shim disconnected" id=e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f namespace=k8s.io Mar 12 01:29:38.846419 containerd[1466]: time="2026-03-12T01:29:38.846258569Z" level=warning msg="cleaning up after shim disconnected" id=e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f namespace=k8s.io Mar 12 01:29:38.846419 containerd[1466]: time="2026-03-12T01:29:38.846274269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:29:39.305706 containerd[1466]: time="2026-03-12T01:29:39.304703331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:29:39.309118 containerd[1466]: time="2026-03-12T01:29:39.309007427Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 12 01:29:39.312102 containerd[1466]: time="2026-03-12T01:29:39.312018927Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:29:39.315783 containerd[1466]: time="2026-03-12T01:29:39.315457946Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.911249904s" Mar 12 01:29:39.315783 containerd[1466]: time="2026-03-12T01:29:39.315529362Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 12 01:29:39.344327 containerd[1466]: time="2026-03-12T01:29:39.344178692Z" level=info msg="CreateContainer within sandbox \"9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 12 01:29:39.418419 containerd[1466]: time="2026-03-12T01:29:39.418013641Z" level=info msg="CreateContainer within sandbox \"9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\"" Mar 12 01:29:39.419555 containerd[1466]: time="2026-03-12T01:29:39.419412332Z" level=info msg="StartContainer for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\"" Mar 12 01:29:39.442776 kubelet[2569]: E0312 01:29:39.442463 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:29:39.476345 containerd[1466]: time="2026-03-12T01:29:39.475538473Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 12 01:29:39.477817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f-rootfs.mount: Deactivated successfully. Mar 12 01:29:39.499509 systemd[1]: run-containerd-runc-k8s.io-90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95-runc.gZDSgl.mount: Deactivated successfully. 
Mar 12 01:29:39.521026 systemd[1]: Started cri-containerd-90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95.scope - libcontainer container 90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95. Mar 12 01:29:39.549951 containerd[1466]: time="2026-03-12T01:29:39.549826393Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\"" Mar 12 01:29:39.551500 containerd[1466]: time="2026-03-12T01:29:39.551422014Z" level=info msg="StartContainer for \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\"" Mar 12 01:29:39.614150 containerd[1466]: time="2026-03-12T01:29:39.612405806Z" level=info msg="StartContainer for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" returns successfully" Mar 12 01:29:39.622156 systemd[1]: Started cri-containerd-c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834.scope - libcontainer container c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834. Mar 12 01:29:39.721719 systemd[1]: cri-containerd-c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834.scope: Deactivated successfully. 
Mar 12 01:29:39.738792 containerd[1466]: time="2026-03-12T01:29:39.733443306Z" level=info msg="StartContainer for \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\" returns successfully"
Mar 12 01:29:39.832036 containerd[1466]: time="2026-03-12T01:29:39.831924746Z" level=info msg="shim disconnected" id=c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834 namespace=k8s.io
Mar 12 01:29:39.832036 containerd[1466]: time="2026-03-12T01:29:39.832009598Z" level=warning msg="cleaning up after shim disconnected" id=c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834 namespace=k8s.io
Mar 12 01:29:39.832036 containerd[1466]: time="2026-03-12T01:29:39.832023024Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:29:40.466161 kubelet[2569]: E0312 01:29:40.465866 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:40.472940 kubelet[2569]: E0312 01:29:40.468886 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:40.483900 containerd[1466]: time="2026-03-12T01:29:40.483828586Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 01:29:40.524022 containerd[1466]: time="2026-03-12T01:29:40.523972807Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\""
Mar 12 01:29:40.532071 containerd[1466]: time="2026-03-12T01:29:40.531991623Z" level=info msg="StartContainer for \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\""
Mar 12 01:29:40.777739 kubelet[2569]: I0312 01:29:40.777256 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-p9cp6" podStartSLOduration=2.016551152 podStartE2EDuration="30.777239427s" podCreationTimestamp="2026-03-12 01:29:10 +0000 UTC" firstStartedPulling="2026-03-12 01:29:10.556386487 +0000 UTC m=+6.916369692" lastFinishedPulling="2026-03-12 01:29:39.317074772 +0000 UTC m=+35.677057967" observedRunningTime="2026-03-12 01:29:40.510098886 +0000 UTC m=+36.870082081" watchObservedRunningTime="2026-03-12 01:29:40.777239427 +0000 UTC m=+37.137222622"
Mar 12 01:29:40.806877 systemd[1]: Started cri-containerd-df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370.scope - libcontainer container df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370.
Mar 12 01:29:40.850537 systemd[1]: cri-containerd-df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370.scope: Deactivated successfully.
Mar 12 01:29:40.852630 containerd[1466]: time="2026-03-12T01:29:40.852512526Z" level=info msg="StartContainer for \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\" returns successfully"
Mar 12 01:29:40.909752 containerd[1466]: time="2026-03-12T01:29:40.909470005Z" level=info msg="shim disconnected" id=df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370 namespace=k8s.io
Mar 12 01:29:40.909752 containerd[1466]: time="2026-03-12T01:29:40.909542323Z" level=warning msg="cleaning up after shim disconnected" id=df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370 namespace=k8s.io
Mar 12 01:29:40.909752 containerd[1466]: time="2026-03-12T01:29:40.909556560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:29:41.475021 systemd[1]: run-containerd-runc-k8s.io-df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370-runc.JmBjyz.mount: Deactivated successfully.
Mar 12 01:29:41.475249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370-rootfs.mount: Deactivated successfully.
Mar 12 01:29:41.483627 kubelet[2569]: E0312 01:29:41.483335 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:41.489812 kubelet[2569]: E0312 01:29:41.489298 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:41.516308 containerd[1466]: time="2026-03-12T01:29:41.515975896Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 01:29:41.635551 containerd[1466]: time="2026-03-12T01:29:41.635233009Z" level=info msg="CreateContainer within sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\""
Mar 12 01:29:41.637045 containerd[1466]: time="2026-03-12T01:29:41.636938357Z" level=info msg="StartContainer for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\""
Mar 12 01:29:41.705914 systemd[1]: Started cri-containerd-4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087.scope - libcontainer container 4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087.
Mar 12 01:29:41.796922 containerd[1466]: time="2026-03-12T01:29:41.796776299Z" level=info msg="StartContainer for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" returns successfully"
Mar 12 01:29:42.251888 kubelet[2569]: I0312 01:29:42.250353 2569 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 12 01:29:42.454421 systemd[1]: Created slice kubepods-burstable-pod980a15a2_a809_4d54_815c_1a8dc1920ad8.slice - libcontainer container kubepods-burstable-pod980a15a2_a809_4d54_815c_1a8dc1920ad8.slice.
Mar 12 01:29:42.473711 kubelet[2569]: I0312 01:29:42.471147 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwf54\" (UniqueName: \"kubernetes.io/projected/de534575-803d-405b-b088-1772e9813f06-kube-api-access-rwf54\") pod \"coredns-7d764666f9-wh29r\" (UID: \"de534575-803d-405b-b088-1772e9813f06\") " pod="kube-system/coredns-7d764666f9-wh29r"
Mar 12 01:29:42.473711 kubelet[2569]: I0312 01:29:42.471186 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/980a15a2-a809-4d54-815c-1a8dc1920ad8-config-volume\") pod \"coredns-7d764666f9-22cfq\" (UID: \"980a15a2-a809-4d54-815c-1a8dc1920ad8\") " pod="kube-system/coredns-7d764666f9-22cfq"
Mar 12 01:29:42.473711 kubelet[2569]: I0312 01:29:42.471212 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de534575-803d-405b-b088-1772e9813f06-config-volume\") pod \"coredns-7d764666f9-wh29r\" (UID: \"de534575-803d-405b-b088-1772e9813f06\") " pod="kube-system/coredns-7d764666f9-wh29r"
Mar 12 01:29:42.473711 kubelet[2569]: I0312 01:29:42.471236 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxvz\" (UniqueName: \"kubernetes.io/projected/980a15a2-a809-4d54-815c-1a8dc1920ad8-kube-api-access-cmxvz\") pod \"coredns-7d764666f9-22cfq\" (UID: \"980a15a2-a809-4d54-815c-1a8dc1920ad8\") " pod="kube-system/coredns-7d764666f9-22cfq"
Mar 12 01:29:42.494750 systemd[1]: Created slice kubepods-burstable-podde534575_803d_405b_b088_1772e9813f06.slice - libcontainer container kubepods-burstable-podde534575_803d_405b_b088_1772e9813f06.slice.
Mar 12 01:29:42.504083 kubelet[2569]: E0312 01:29:42.503966 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:42.547798 kubelet[2569]: I0312 01:29:42.547694 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-6m4jd" podStartSLOduration=2.343418569 podStartE2EDuration="33.54767655s" podCreationTimestamp="2026-03-12 01:29:09 +0000 UTC" firstStartedPulling="2026-03-12 01:29:10.290799628 +0000 UTC m=+6.650782833" lastFinishedPulling="2026-03-12 01:29:41.495057619 +0000 UTC m=+37.855040814" observedRunningTime="2026-03-12 01:29:42.545484768 +0000 UTC m=+38.905468013" watchObservedRunningTime="2026-03-12 01:29:42.54767655 +0000 UTC m=+38.907659745"
Mar 12 01:29:42.787744 kubelet[2569]: E0312 01:29:42.786750 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:42.810238 kubelet[2569]: E0312 01:29:42.810181 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:42.823345 containerd[1466]: time="2026-03-12T01:29:42.823131481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-22cfq,Uid:980a15a2-a809-4d54-815c-1a8dc1920ad8,Namespace:kube-system,Attempt:0,}"
Mar 12 01:29:42.836250 containerd[1466]: time="2026-03-12T01:29:42.836136733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wh29r,Uid:de534575-803d-405b-b088-1772e9813f06,Namespace:kube-system,Attempt:0,}"
Mar 12 01:29:43.527675 kubelet[2569]: E0312 01:29:43.524262 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:44.526529 kubelet[2569]: E0312 01:29:44.525962 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:44.806747 systemd-networkd[1406]: cilium_host: Link UP
Mar 12 01:29:44.807127 systemd-networkd[1406]: cilium_net: Link UP
Mar 12 01:29:44.807133 systemd-networkd[1406]: cilium_net: Gained carrier
Mar 12 01:29:44.810411 systemd-networkd[1406]: cilium_host: Gained carrier
Mar 12 01:29:44.816858 systemd-networkd[1406]: cilium_host: Gained IPv6LL
Mar 12 01:29:45.037491 systemd-networkd[1406]: cilium_net: Gained IPv6LL
Mar 12 01:29:45.039332 systemd-networkd[1406]: cilium_vxlan: Link UP
Mar 12 01:29:45.039509 systemd-networkd[1406]: cilium_vxlan: Gained carrier
Mar 12 01:29:45.602862 kernel: NET: Registered PF_ALG protocol family
Mar 12 01:29:46.139159 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL
Mar 12 01:29:47.485512 systemd-networkd[1406]: lxc_health: Link UP
Mar 12 01:29:47.497806 systemd-networkd[1406]: lxc_health: Gained carrier
Mar 12 01:29:48.044399 systemd-networkd[1406]: lxc865873fa5626: Link UP
Mar 12 01:29:48.053426 systemd-networkd[1406]: lxcddb0291089fe: Link UP
Mar 12 01:29:48.087721 kernel: eth0: renamed from tmp5483a
Mar 12 01:29:48.105998 kernel: eth0: renamed from tmpf4333
Mar 12 01:29:48.115134 systemd-networkd[1406]: lxc865873fa5626: Gained carrier
Mar 12 01:29:48.116121 systemd-networkd[1406]: lxcddb0291089fe: Gained carrier
Mar 12 01:29:48.140049 kubelet[2569]: E0312 01:29:48.139984 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:48.549642 kubelet[2569]: E0312 01:29:48.549499 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:48.696982 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Mar 12 01:29:49.401005 systemd-networkd[1406]: lxc865873fa5626: Gained IPv6LL
Mar 12 01:29:49.913368 systemd-networkd[1406]: lxcddb0291089fe: Gained IPv6LL
Mar 12 01:29:54.099725 containerd[1466]: time="2026-03-12T01:29:54.094619376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:29:54.100328 containerd[1466]: time="2026-03-12T01:29:54.099774311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:29:54.100328 containerd[1466]: time="2026-03-12T01:29:54.099882507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:29:54.100328 containerd[1466]: time="2026-03-12T01:29:54.100068282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:29:54.107692 containerd[1466]: time="2026-03-12T01:29:54.106709116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:29:54.108740 containerd[1466]: time="2026-03-12T01:29:54.108504442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:29:54.108740 containerd[1466]: time="2026-03-12T01:29:54.108556051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:29:54.108856 containerd[1466]: time="2026-03-12T01:29:54.108718670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:29:54.130650 systemd[1]: run-containerd-runc-k8s.io-5483a01f4674afaee8b1d39f83b651b7579a1ebfe8e49a640d38b9ac42dd10c4-runc.jwc8bH.mount: Deactivated successfully.
Mar 12 01:29:54.150350 systemd[1]: Started cri-containerd-5483a01f4674afaee8b1d39f83b651b7579a1ebfe8e49a640d38b9ac42dd10c4.scope - libcontainer container 5483a01f4674afaee8b1d39f83b651b7579a1ebfe8e49a640d38b9ac42dd10c4.
Mar 12 01:29:54.161085 systemd[1]: Started cri-containerd-f4333cf49afe4c0acd8d943cd115d00aa2d6bf014c65286b3cc02e934d0f4895.scope - libcontainer container f4333cf49afe4c0acd8d943cd115d00aa2d6bf014c65286b3cc02e934d0f4895.
Mar 12 01:29:54.182719 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 12 01:29:54.184959 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 12 01:29:54.237760 containerd[1466]: time="2026-03-12T01:29:54.237708448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-22cfq,Uid:980a15a2-a809-4d54-815c-1a8dc1920ad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4333cf49afe4c0acd8d943cd115d00aa2d6bf014c65286b3cc02e934d0f4895\""
Mar 12 01:29:54.248373 kubelet[2569]: E0312 01:29:54.247951 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:54.259910 containerd[1466]: time="2026-03-12T01:29:54.259757071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wh29r,Uid:de534575-803d-405b-b088-1772e9813f06,Namespace:kube-system,Attempt:0,} returns sandbox id \"5483a01f4674afaee8b1d39f83b651b7579a1ebfe8e49a640d38b9ac42dd10c4\""
Mar 12 01:29:54.262840 kubelet[2569]: E0312 01:29:54.262486 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:54.264847 containerd[1466]: time="2026-03-12T01:29:54.264344847Z" level=info msg="CreateContainer within sandbox \"f4333cf49afe4c0acd8d943cd115d00aa2d6bf014c65286b3cc02e934d0f4895\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 01:29:54.271636 containerd[1466]: time="2026-03-12T01:29:54.271526340Z" level=info msg="CreateContainer within sandbox \"5483a01f4674afaee8b1d39f83b651b7579a1ebfe8e49a640d38b9ac42dd10c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 01:29:54.305463 containerd[1466]: time="2026-03-12T01:29:54.305290424Z" level=info msg="CreateContainer within sandbox \"5483a01f4674afaee8b1d39f83b651b7579a1ebfe8e49a640d38b9ac42dd10c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d11671b25ce97d102c223fc97e15405cc74e2f93cb6433d2c2f3a85082df802\""
Mar 12 01:29:54.306711 containerd[1466]: time="2026-03-12T01:29:54.306533756Z" level=info msg="StartContainer for \"9d11671b25ce97d102c223fc97e15405cc74e2f93cb6433d2c2f3a85082df802\""
Mar 12 01:29:54.334758 containerd[1466]: time="2026-03-12T01:29:54.334562954Z" level=info msg="CreateContainer within sandbox \"f4333cf49afe4c0acd8d943cd115d00aa2d6bf014c65286b3cc02e934d0f4895\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58a11aacc40e6ce6d592da494fe46f9ea74caa73562be6223efe1e3b99071cb3\""
Mar 12 01:29:54.335803 containerd[1466]: time="2026-03-12T01:29:54.335768972Z" level=info msg="StartContainer for \"58a11aacc40e6ce6d592da494fe46f9ea74caa73562be6223efe1e3b99071cb3\""
Mar 12 01:29:54.348847 systemd[1]: Started cri-containerd-9d11671b25ce97d102c223fc97e15405cc74e2f93cb6433d2c2f3a85082df802.scope - libcontainer container 9d11671b25ce97d102c223fc97e15405cc74e2f93cb6433d2c2f3a85082df802.
Mar 12 01:29:54.380998 systemd[1]: Started cri-containerd-58a11aacc40e6ce6d592da494fe46f9ea74caa73562be6223efe1e3b99071cb3.scope - libcontainer container 58a11aacc40e6ce6d592da494fe46f9ea74caa73562be6223efe1e3b99071cb3.
Mar 12 01:29:54.400475 containerd[1466]: time="2026-03-12T01:29:54.400345927Z" level=info msg="StartContainer for \"9d11671b25ce97d102c223fc97e15405cc74e2f93cb6433d2c2f3a85082df802\" returns successfully"
Mar 12 01:29:54.422718 containerd[1466]: time="2026-03-12T01:29:54.422652646Z" level=info msg="StartContainer for \"58a11aacc40e6ce6d592da494fe46f9ea74caa73562be6223efe1e3b99071cb3\" returns successfully"
Mar 12 01:29:54.582350 kubelet[2569]: E0312 01:29:54.582259 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:54.590901 kubelet[2569]: E0312 01:29:54.590650 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:54.634431 kubelet[2569]: I0312 01:29:54.634278 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-22cfq" podStartSLOduration=44.634260941 podStartE2EDuration="44.634260941s" podCreationTimestamp="2026-03-12 01:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:29:54.632414883 +0000 UTC m=+50.992398098" watchObservedRunningTime="2026-03-12 01:29:54.634260941 +0000 UTC m=+50.994244137"
Mar 12 01:29:54.634431 kubelet[2569]: I0312 01:29:54.634413 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wh29r" podStartSLOduration=44.634405686 podStartE2EDuration="44.634405686s" podCreationTimestamp="2026-03-12 01:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:29:54.610553974 +0000 UTC m=+50.970537169" watchObservedRunningTime="2026-03-12 01:29:54.634405686 +0000 UTC m=+50.994388880"
Mar 12 01:29:55.593754 kubelet[2569]: E0312 01:29:55.593714 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:55.594474 kubelet[2569]: E0312 01:29:55.593807 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:56.595364 kubelet[2569]: E0312 01:29:56.595125 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:29:56.595364 kubelet[2569]: E0312 01:29:56.595303 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:08.971011 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:54240.service - OpenSSH per-connection server daemon (10.0.0.1:54240).
Mar 12 01:30:09.030722 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 54240 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:09.033550 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:09.042827 systemd-logind[1452]: New session 10 of user core.
Mar 12 01:30:09.053948 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 01:30:09.292374 sshd[3996]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:09.298059 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:54240.service: Deactivated successfully.
Mar 12 01:30:09.300668 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 01:30:09.301905 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Mar 12 01:30:09.304293 systemd-logind[1452]: Removed session 10.
Mar 12 01:30:14.311837 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:57238.service - OpenSSH per-connection server daemon (10.0.0.1:57238).
Mar 12 01:30:14.352692 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 57238 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:14.355054 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:14.363264 systemd-logind[1452]: New session 11 of user core.
Mar 12 01:30:14.374843 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 01:30:14.531992 sshd[4013]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:14.537676 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:57238.service: Deactivated successfully.
Mar 12 01:30:14.540513 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 01:30:14.542729 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Mar 12 01:30:14.545417 systemd-logind[1452]: Removed session 11.
Mar 12 01:30:18.814718 kubelet[2569]: E0312 01:30:18.814493 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:19.549712 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:57248.service - OpenSSH per-connection server daemon (10.0.0.1:57248).
Mar 12 01:30:19.605881 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 57248 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:19.608206 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:19.616962 systemd-logind[1452]: New session 12 of user core.
Mar 12 01:30:19.626908 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 01:30:19.777723 sshd[4029]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:19.782871 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:57248.service: Deactivated successfully.
Mar 12 01:30:19.785833 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 01:30:19.788716 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Mar 12 01:30:19.790738 systemd-logind[1452]: Removed session 12.
Mar 12 01:30:24.817142 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:39338.service - OpenSSH per-connection server daemon (10.0.0.1:39338).
Mar 12 01:30:24.921500 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 39338 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:24.923563 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:24.943408 systemd-logind[1452]: New session 13 of user core.
Mar 12 01:30:24.953993 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 01:30:25.232810 sshd[4045]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:25.239495 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:39338.service: Deactivated successfully.
Mar 12 01:30:25.244568 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 01:30:25.245763 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Mar 12 01:30:25.247732 systemd-logind[1452]: Removed session 13.
Mar 12 01:30:30.268491 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:45792.service - OpenSSH per-connection server daemon (10.0.0.1:45792).
Mar 12 01:30:30.349541 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 45792 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:30.350527 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:30.373398 systemd-logind[1452]: New session 14 of user core.
Mar 12 01:30:30.384893 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 01:30:30.677762 sshd[4061]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:30.688158 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:45792.service: Deactivated successfully.
Mar 12 01:30:30.693022 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 01:30:30.701203 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Mar 12 01:30:30.703777 systemd-logind[1452]: Removed session 14.
Mar 12 01:30:34.816817 kubelet[2569]: E0312 01:30:34.816721 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:35.695992 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:45794.service - OpenSSH per-connection server daemon (10.0.0.1:45794).
Mar 12 01:30:35.750347 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 45794 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:35.752379 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:35.768686 systemd-logind[1452]: New session 15 of user core.
Mar 12 01:30:35.778117 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 01:30:35.950682 sshd[4077]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:35.965083 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:45794.service: Deactivated successfully.
Mar 12 01:30:35.968065 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 01:30:35.969767 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Mar 12 01:30:35.971487 systemd-logind[1452]: Removed session 15.
Mar 12 01:30:36.815162 kubelet[2569]: E0312 01:30:36.814629 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:36.815162 kubelet[2569]: E0312 01:30:36.814901 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:40.985130 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:60212.service - OpenSSH per-connection server daemon (10.0.0.1:60212).
Mar 12 01:30:41.028904 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 60212 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:41.032548 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:41.040928 systemd-logind[1452]: New session 16 of user core.
Mar 12 01:30:41.057541 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 01:30:41.219680 sshd[4094]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:41.236929 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:60212.service: Deactivated successfully.
Mar 12 01:30:41.240028 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 01:30:41.242080 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Mar 12 01:30:41.252342 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:60214.service - OpenSSH per-connection server daemon (10.0.0.1:60214).
Mar 12 01:30:41.253245 systemd-logind[1452]: Removed session 16.
Mar 12 01:30:41.296411 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 60214 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:41.298880 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:41.306332 systemd-logind[1452]: New session 17 of user core.
Mar 12 01:30:41.313934 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 01:30:41.525182 sshd[4110]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:41.539052 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:60214.service: Deactivated successfully.
Mar 12 01:30:41.541714 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 01:30:41.545146 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Mar 12 01:30:41.556225 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:60230.service - OpenSSH per-connection server daemon (10.0.0.1:60230).
Mar 12 01:30:41.569762 systemd-logind[1452]: Removed session 17.
Mar 12 01:30:41.606713 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 60230 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:41.609504 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:41.618555 systemd-logind[1452]: New session 18 of user core.
Mar 12 01:30:41.628258 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 01:30:41.802865 sshd[4124]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:41.809832 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:60230.service: Deactivated successfully.
Mar 12 01:30:41.812560 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 01:30:41.815432 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Mar 12 01:30:41.818180 systemd-logind[1452]: Removed session 18.
Mar 12 01:30:46.830826 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:60246.service - OpenSSH per-connection server daemon (10.0.0.1:60246).
Mar 12 01:30:46.930490 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 60246 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:46.933545 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:46.947303 systemd-logind[1452]: New session 19 of user core.
Mar 12 01:30:46.956884 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 01:30:47.140031 sshd[4138]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:47.149042 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:60246.service: Deactivated successfully.
Mar 12 01:30:47.152023 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 01:30:47.154259 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Mar 12 01:30:47.166461 systemd-logind[1452]: Removed session 19.
Mar 12 01:30:49.816190 kubelet[2569]: E0312 01:30:49.815196 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:52.153518 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:56246.service - OpenSSH per-connection server daemon (10.0.0.1:56246).
Mar 12 01:30:52.203724 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 56246 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:52.206225 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:52.215016 systemd-logind[1452]: New session 20 of user core.
Mar 12 01:30:52.222752 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 01:30:52.388211 sshd[4152]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:52.401824 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:56246.service: Deactivated successfully.
Mar 12 01:30:52.407260 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 01:30:52.413910 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Mar 12 01:30:52.417875 systemd-logind[1452]: Removed session 20.
Mar 12 01:30:55.820083 kubelet[2569]: E0312 01:30:55.819825 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:30:57.421418 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260).
Mar 12 01:30:57.473509 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:30:57.476289 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:30:57.484738 systemd-logind[1452]: New session 21 of user core.
Mar 12 01:30:57.495125 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 01:30:57.673102 sshd[4167]: pam_unix(sshd:session): session closed for user core
Mar 12 01:30:57.681917 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Mar 12 01:30:57.682398 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:56260.service: Deactivated successfully.
Mar 12 01:30:57.689231 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 01:30:57.693264 systemd-logind[1452]: Removed session 21.
Mar 12 01:31:02.697724 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:38440.service - OpenSSH per-connection server daemon (10.0.0.1:38440).
Mar 12 01:31:02.736173 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 38440 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:02.738122 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:02.743222 systemd-logind[1452]: New session 22 of user core.
Mar 12 01:31:02.758288 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 01:31:02.917276 sshd[4182]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:02.926819 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:38440.service: Deactivated successfully.
Mar 12 01:31:02.929799 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 01:31:02.932192 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Mar 12 01:31:02.940621 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:38446.service - OpenSSH per-connection server daemon (10.0.0.1:38446).
Mar 12 01:31:02.942512 systemd-logind[1452]: Removed session 22.
Mar 12 01:31:02.982533 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 38446 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:02.984896 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:02.991337 systemd-logind[1452]: New session 23 of user core.
Mar 12 01:31:03.003812 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 01:31:03.394097 sshd[4197]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:03.407899 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:38446.service: Deactivated successfully.
Mar 12 01:31:03.410639 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 01:31:03.413397 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Mar 12 01:31:03.424089 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:38452.service - OpenSSH per-connection server daemon (10.0.0.1:38452).
Mar 12 01:31:03.426181 systemd-logind[1452]: Removed session 23.
Mar 12 01:31:03.467395 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 38452 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:03.470047 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:03.476966 systemd-logind[1452]: New session 24 of user core.
Mar 12 01:31:03.487882 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 12 01:31:04.148970 sshd[4210]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:04.166100 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:38452.service: Deactivated successfully.
Mar 12 01:31:04.170135 systemd[1]: session-24.scope: Deactivated successfully.
Mar 12 01:31:04.173905 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Mar 12 01:31:04.184245 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:38456.service - OpenSSH per-connection server daemon (10.0.0.1:38456).
Mar 12 01:31:04.186061 systemd-logind[1452]: Removed session 24.
Mar 12 01:31:04.245785 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 38456 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:04.248820 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:04.259420 systemd-logind[1452]: New session 25 of user core.
Mar 12 01:31:04.266816 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 01:31:04.546510 sshd[4230]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:04.560240 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:38456.service: Deactivated successfully.
Mar 12 01:31:04.562897 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 01:31:04.564143 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Mar 12 01:31:04.572363 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:38460.service - OpenSSH per-connection server daemon (10.0.0.1:38460).
Mar 12 01:31:04.575384 systemd-logind[1452]: Removed session 25.
Mar 12 01:31:04.612374 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 38460 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:04.614868 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:04.622824 systemd-logind[1452]: New session 26 of user core.
Mar 12 01:31:04.632679 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 12 01:31:04.793181 sshd[4242]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:04.798885 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:38460.service: Deactivated successfully.
Mar 12 01:31:04.801382 systemd[1]: session-26.scope: Deactivated successfully.
Mar 12 01:31:04.802447 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
Mar 12 01:31:04.804136 systemd-logind[1452]: Removed session 26.
Mar 12 01:31:09.807868 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:38466.service - OpenSSH per-connection server daemon (10.0.0.1:38466).
Mar 12 01:31:09.857239 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 38466 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:09.860910 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:09.867937 systemd-logind[1452]: New session 27 of user core.
Mar 12 01:31:09.878836 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 12 01:31:10.035007 sshd[4258]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:10.042217 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:38466.service: Deactivated successfully.
Mar 12 01:31:10.044993 systemd[1]: session-27.scope: Deactivated successfully.
Mar 12 01:31:10.046433 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Mar 12 01:31:10.048540 systemd-logind[1452]: Removed session 27.
Mar 12 01:31:10.815742 kubelet[2569]: E0312 01:31:10.815431 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:15.049518 systemd[1]: Started sshd@27-10.0.0.81:22-10.0.0.1:53240.service - OpenSSH per-connection server daemon (10.0.0.1:53240).
Mar 12 01:31:15.097217 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 53240 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:15.099821 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:15.107933 systemd-logind[1452]: New session 28 of user core.
Mar 12 01:31:15.125880 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 12 01:31:15.259121 sshd[4277]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:15.264684 systemd[1]: sshd@27-10.0.0.81:22-10.0.0.1:53240.service: Deactivated successfully.
Mar 12 01:31:15.268006 systemd[1]: session-28.scope: Deactivated successfully.
Mar 12 01:31:15.269177 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Mar 12 01:31:15.270636 systemd-logind[1452]: Removed session 28.
Mar 12 01:31:20.272467 systemd[1]: Started sshd@28-10.0.0.81:22-10.0.0.1:37950.service - OpenSSH per-connection server daemon (10.0.0.1:37950).
Mar 12 01:31:20.314834 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 37950 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:20.316718 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:20.322494 systemd-logind[1452]: New session 29 of user core.
Mar 12 01:31:20.330826 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 12 01:31:20.463370 sshd[4294]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:20.474249 systemd[1]: sshd@28-10.0.0.81:22-10.0.0.1:37950.service: Deactivated successfully.
Mar 12 01:31:20.477159 systemd[1]: session-29.scope: Deactivated successfully.
Mar 12 01:31:20.479467 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit.
Mar 12 01:31:20.485802 systemd[1]: Started sshd@29-10.0.0.81:22-10.0.0.1:37958.service - OpenSSH per-connection server daemon (10.0.0.1:37958).
Mar 12 01:31:20.487319 systemd-logind[1452]: Removed session 29.
Mar 12 01:31:20.531305 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 37958 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:20.533719 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:20.542557 systemd-logind[1452]: New session 30 of user core.
Mar 12 01:31:20.546834 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 12 01:31:22.111504 containerd[1466]: time="2026-03-12T01:31:22.111448674Z" level=info msg="StopContainer for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" with timeout 30 (s)"
Mar 12 01:31:22.113194 containerd[1466]: time="2026-03-12T01:31:22.112720655Z" level=info msg="Stop container \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" with signal terminated"
Mar 12 01:31:22.143069 systemd[1]: run-containerd-runc-k8s.io-4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087-runc.DSkJe6.mount: Deactivated successfully.
Mar 12 01:31:22.150170 systemd[1]: cri-containerd-90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95.scope: Deactivated successfully.
Mar 12 01:31:22.165082 containerd[1466]: time="2026-03-12T01:31:22.165027394Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 01:31:22.175477 containerd[1466]: time="2026-03-12T01:31:22.175444031Z" level=info msg="StopContainer for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" with timeout 2 (s)"
Mar 12 01:31:22.176142 containerd[1466]: time="2026-03-12T01:31:22.176102785Z" level=info msg="Stop container \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" with signal terminated"
Mar 12 01:31:22.191925 systemd-networkd[1406]: lxc_health: Link DOWN
Mar 12 01:31:22.191975 systemd-networkd[1406]: lxc_health: Lost carrier
Mar 12 01:31:22.193639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95-rootfs.mount: Deactivated successfully.
Mar 12 01:31:22.224530 containerd[1466]: time="2026-03-12T01:31:22.224086275Z" level=info msg="shim disconnected" id=90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95 namespace=k8s.io
Mar 12 01:31:22.224530 containerd[1466]: time="2026-03-12T01:31:22.224172968Z" level=warning msg="cleaning up after shim disconnected" id=90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95 namespace=k8s.io
Mar 12 01:31:22.224530 containerd[1466]: time="2026-03-12T01:31:22.224193136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:22.233572 systemd[1]: cri-containerd-4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087.scope: Deactivated successfully.
Mar 12 01:31:22.234097 systemd[1]: cri-containerd-4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087.scope: Consumed 12.638s CPU time.
Mar 12 01:31:22.250672 containerd[1466]: time="2026-03-12T01:31:22.250528143Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:31:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 12 01:31:22.258110 containerd[1466]: time="2026-03-12T01:31:22.258038864Z" level=info msg="StopContainer for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" returns successfully"
Mar 12 01:31:22.259646 containerd[1466]: time="2026-03-12T01:31:22.259528927Z" level=info msg="StopPodSandbox for \"9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6\""
Mar 12 01:31:22.259646 containerd[1466]: time="2026-03-12T01:31:22.259634917Z" level=info msg="Container to stop \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 01:31:22.262269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6-shm.mount: Deactivated successfully.
Mar 12 01:31:22.269486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087-rootfs.mount: Deactivated successfully.
Mar 12 01:31:22.275297 systemd[1]: cri-containerd-9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6.scope: Deactivated successfully.
Mar 12 01:31:22.284008 containerd[1466]: time="2026-03-12T01:31:22.282643924Z" level=info msg="shim disconnected" id=4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087 namespace=k8s.io
Mar 12 01:31:22.284008 containerd[1466]: time="2026-03-12T01:31:22.282989197Z" level=warning msg="cleaning up after shim disconnected" id=4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087 namespace=k8s.io
Mar 12 01:31:22.284008 containerd[1466]: time="2026-03-12T01:31:22.283010446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:22.303218 containerd[1466]: time="2026-03-12T01:31:22.303026365Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:31:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 12 01:31:22.310737 containerd[1466]: time="2026-03-12T01:31:22.310667583Z" level=info msg="StopContainer for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" returns successfully"
Mar 12 01:31:22.311709 containerd[1466]: time="2026-03-12T01:31:22.311664363Z" level=info msg="StopPodSandbox for \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\""
Mar 12 01:31:22.311709 containerd[1466]: time="2026-03-12T01:31:22.311725199Z" level=info msg="Container to stop \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 01:31:22.311709 containerd[1466]: time="2026-03-12T01:31:22.311744585Z" level=info msg="Container to stop \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 01:31:22.311969 containerd[1466]: time="2026-03-12T01:31:22.311762068Z" level=info msg="Container to stop \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 01:31:22.311969 containerd[1466]: time="2026-03-12T01:31:22.311778720Z" level=info msg="Container to stop \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 01:31:22.311969 containerd[1466]: time="2026-03-12T01:31:22.311794460Z" level=info msg="Container to stop \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 01:31:22.315623 containerd[1466]: time="2026-03-12T01:31:22.314649608Z" level=info msg="shim disconnected" id=9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6 namespace=k8s.io
Mar 12 01:31:22.315623 containerd[1466]: time="2026-03-12T01:31:22.314702507Z" level=warning msg="cleaning up after shim disconnected" id=9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6 namespace=k8s.io
Mar 12 01:31:22.315623 containerd[1466]: time="2026-03-12T01:31:22.314717085Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:22.322210 systemd[1]: cri-containerd-5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57.scope: Deactivated successfully.
Mar 12 01:31:22.338332 containerd[1466]: time="2026-03-12T01:31:22.338217263Z" level=info msg="TearDown network for sandbox \"9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6\" successfully"
Mar 12 01:31:22.338332 containerd[1466]: time="2026-03-12T01:31:22.338271958Z" level=info msg="StopPodSandbox for \"9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6\" returns successfully"
Mar 12 01:31:22.365752 containerd[1466]: time="2026-03-12T01:31:22.365305359Z" level=info msg="shim disconnected" id=5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57 namespace=k8s.io
Mar 12 01:31:22.365752 containerd[1466]: time="2026-03-12T01:31:22.365369640Z" level=warning msg="cleaning up after shim disconnected" id=5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57 namespace=k8s.io
Mar 12 01:31:22.365752 containerd[1466]: time="2026-03-12T01:31:22.365382634Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:22.393793 containerd[1466]: time="2026-03-12T01:31:22.393403374Z" level=info msg="TearDown network for sandbox \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" successfully"
Mar 12 01:31:22.393793 containerd[1466]: time="2026-03-12T01:31:22.393441946Z" level=info msg="StopPodSandbox for \"5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57\" returns successfully"
Mar 12 01:31:22.459873 kubelet[2569]: I0312 01:31:22.459252 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/cccff58b-6329-4f0d-a95b-6c7e986deb28-kube-api-access-srj8v\" (UniqueName: \"kubernetes.io/projected/cccff58b-6329-4f0d-a95b-6c7e986deb28-kube-api-access-srj8v\") pod \"cccff58b-6329-4f0d-a95b-6c7e986deb28\" (UID: \"cccff58b-6329-4f0d-a95b-6c7e986deb28\") "
Mar 12 01:31:22.459873 kubelet[2569]: I0312 01:31:22.459350 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/cccff58b-6329-4f0d-a95b-6c7e986deb28-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cccff58b-6329-4f0d-a95b-6c7e986deb28-cilium-config-path\") pod \"cccff58b-6329-4f0d-a95b-6c7e986deb28\" (UID: \"cccff58b-6329-4f0d-a95b-6c7e986deb28\") "
Mar 12 01:31:22.465125 kubelet[2569]: I0312 01:31:22.465070 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cccff58b-6329-4f0d-a95b-6c7e986deb28-cilium-config-path" pod "cccff58b-6329-4f0d-a95b-6c7e986deb28" (UID: "cccff58b-6329-4f0d-a95b-6c7e986deb28"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 12 01:31:22.466367 kubelet[2569]: I0312 01:31:22.466274 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccff58b-6329-4f0d-a95b-6c7e986deb28-kube-api-access-srj8v" pod "cccff58b-6329-4f0d-a95b-6c7e986deb28" (UID: "cccff58b-6329-4f0d-a95b-6c7e986deb28"). InnerVolumeSpecName "kube-api-access-srj8v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 01:31:22.561362 kubelet[2569]: I0312 01:31:22.561212 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-cgroup\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561362 kubelet[2569]: I0312 01:31:22.561297 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-hostproc\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-hostproc\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561362 kubelet[2569]: I0312 01:31:22.561337 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-kube-api-access-mf5wn\" (UniqueName: \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-kube-api-access-mf5wn\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561362 kubelet[2569]: I0312 01:31:22.561339 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-cgroup" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.561362 kubelet[2569]: I0312 01:31:22.561371 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/4d011ef8-9c31-4a72-be18-b0f8003c6132-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d011ef8-9c31-4a72-be18-b0f8003c6132-clustermesh-secrets\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561798 kubelet[2569]: I0312 01:31:22.561401 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cni-path\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cni-path\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561798 kubelet[2569]: I0312 01:31:22.561459 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-kernel\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561798 kubelet[2569]: I0312 01:31:22.561491 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-xtables-lock\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561798 kubelet[2569]: I0312 01:31:22.561519 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-net\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.561798 kubelet[2569]: I0312 01:31:22.561548 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-hubble-tls\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.562037 kubelet[2569]: I0312 01:31:22.561645 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-lib-modules\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.562037 kubelet[2569]: I0312 01:31:22.561676 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-bpf-maps\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.562037 kubelet[2569]: I0312 01:31:22.561704 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-etc-cni-netd\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.562037 kubelet[2569]: I0312 01:31:22.561729 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-run\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.562037 kubelet[2569]: I0312 01:31:22.561759 2569 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-config-path\") pod \"4d011ef8-9c31-4a72-be18-b0f8003c6132\" (UID: \"4d011ef8-9c31-4a72-be18-b0f8003c6132\") "
Mar 12 01:31:22.562241 kubelet[2569]: I0312 01:31:22.561818 2569 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.562241 kubelet[2569]: I0312 01:31:22.561836 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-srj8v\" (UniqueName: \"kubernetes.io/projected/cccff58b-6329-4f0d-a95b-6c7e986deb28-kube-api-access-srj8v\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.562241 kubelet[2569]: I0312 01:31:22.561850 2569 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cccff58b-6329-4f0d-a95b-6c7e986deb28-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.563483 kubelet[2569]: I0312 01:31:22.561440 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-hostproc" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.563483 kubelet[2569]: I0312 01:31:22.562752 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-net" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.563483 kubelet[2569]: I0312 01:31:22.562787 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cni-path" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.563483 kubelet[2569]: I0312 01:31:22.562805 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-kernel" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.563483 kubelet[2569]: I0312 01:31:22.562866 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-xtables-lock" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.563768 kubelet[2569]: I0312 01:31:22.562894 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-bpf-maps" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.568415 kubelet[2569]: I0312 01:31:22.568222 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d011ef8-9c31-4a72-be18-b0f8003c6132-clustermesh-secrets" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 12 01:31:22.568415 kubelet[2569]: I0312 01:31:22.568295 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-lib-modules" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.568415 kubelet[2569]: I0312 01:31:22.568321 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-etc-cni-netd" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.568415 kubelet[2569]: I0312 01:31:22.568347 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-run" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:31:22.569890 kubelet[2569]: I0312 01:31:22.569802 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-kube-api-access-mf5wn" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "kube-api-access-mf5wn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 01:31:22.571171 kubelet[2569]: I0312 01:31:22.571080 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-hubble-tls" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 01:31:22.571832 kubelet[2569]: I0312 01:31:22.571414 2569 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-config-path" pod "4d011ef8-9c31-4a72-be18-b0f8003c6132" (UID: "4d011ef8-9c31-4a72-be18-b0f8003c6132"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.662919 2569 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663008 2569 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663023 2569 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663036 2569 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663048 2569 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663064 2569 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663077 2569 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d011ef8-9c31-4a72-be18-b0f8003c6132-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.663763 kubelet[2569]: I0312 01:31:22.663090 2569 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.664285 kubelet[2569]: I0312 01:31:22.663103 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mf5wn\" (UniqueName: \"kubernetes.io/projected/4d011ef8-9c31-4a72-be18-b0f8003c6132-kube-api-access-mf5wn\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.664285 kubelet[2569]: I0312 01:31:22.663114 2569 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d011ef8-9c31-4a72-be18-b0f8003c6132-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.664285 kubelet[2569]: I0312 01:31:22.663126 2569 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.664285 kubelet[2569]: I0312 01:31:22.663139 2569 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.664285 kubelet[2569]: I0312 01:31:22.663149 2569 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d011ef8-9c31-4a72-be18-b0f8003c6132-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 12 01:31:22.814856 kubelet[2569]: E0312 01:31:22.814703 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:22.963812 kubelet[2569]: I0312 01:31:22.963561 2569 scope.go:122] "RemoveContainer" containerID="90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95"
Mar 12 01:31:22.967560 containerd[1466]: time="2026-03-12T01:31:22.967478219Z" level=info msg="RemoveContainer for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\""
Mar 12 01:31:22.972207 systemd[1]: Removed slice kubepods-besteffort-podcccff58b_6329_4f0d_a95b_6c7e986deb28.slice - libcontainer container kubepods-besteffort-podcccff58b_6329_4f0d_a95b_6c7e986deb28.slice.
Mar 12 01:31:22.980819 systemd[1]: Removed slice kubepods-burstable-pod4d011ef8_9c31_4a72_be18_b0f8003c6132.slice - libcontainer container kubepods-burstable-pod4d011ef8_9c31_4a72_be18_b0f8003c6132.slice.
Mar 12 01:31:22.981309 systemd[1]: kubepods-burstable-pod4d011ef8_9c31_4a72_be18_b0f8003c6132.slice: Consumed 12.878s CPU time.
Mar 12 01:31:22.988431 containerd[1466]: time="2026-03-12T01:31:22.988186275Z" level=info msg="RemoveContainer for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" returns successfully"
Mar 12 01:31:22.988748 kubelet[2569]: I0312 01:31:22.988674 2569 scope.go:122] "RemoveContainer" containerID="90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95"
Mar 12 01:31:22.996828 containerd[1466]: time="2026-03-12T01:31:22.996766529Z" level=error msg="ContainerStatus for \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\": not found"
Mar 12 01:31:23.012066 kubelet[2569]: E0312 01:31:23.011986 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\": not found" containerID="90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95"
Mar 12 01:31:23.012221 kubelet[2569]: I0312 01:31:23.012055 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95"} err="failed to get container status \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\": rpc error: code = NotFound desc = an error occurred when try to find container \"90cd7979c22543bfde9eb97665dbb25a3e46bb3353bda527ab019914a8058c95\": not found"
Mar 12 01:31:23.012221 kubelet[2569]: I0312 01:31:23.012103 2569 scope.go:122] "RemoveContainer" containerID="4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087"
Mar 12 01:31:23.013876 containerd[1466]: time="2026-03-12T01:31:23.013828708Z" level=info msg="RemoveContainer for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\""
Mar 12 01:31:23.020124 containerd[1466]: time="2026-03-12T01:31:23.020036087Z" level=info msg="RemoveContainer for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" returns successfully"
Mar 12 01:31:23.020445 kubelet[2569]: I0312 01:31:23.020326 2569 scope.go:122] "RemoveContainer" containerID="df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370"
Mar 12 01:31:23.022249 containerd[1466]: time="2026-03-12T01:31:23.022211955Z" level=info msg="RemoveContainer for \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\""
Mar 12 01:31:23.027920 containerd[1466]: time="2026-03-12T01:31:23.027814934Z" level=info msg="RemoveContainer for \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\" returns successfully"
Mar 12 01:31:23.028208 kubelet[2569]: I0312 01:31:23.028144 2569 scope.go:122] "RemoveContainer" containerID="c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834"
Mar 12 01:31:23.029473 containerd[1466]: time="2026-03-12T01:31:23.029443215Z" level=info msg="RemoveContainer for \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\""
Mar 12 01:31:23.035639 containerd[1466]: time="2026-03-12T01:31:23.035499219Z" level=info msg="RemoveContainer for \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\" returns successfully"
Mar 12 01:31:23.037319 kubelet[2569]: I0312 01:31:23.037246 2569 scope.go:122] "RemoveContainer" containerID="e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f"
Mar 12 01:31:23.039440 containerd[1466]: time="2026-03-12T01:31:23.039371856Z" level=info msg="RemoveContainer for \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\""
Mar 12 01:31:23.045148 containerd[1466]: time="2026-03-12T01:31:23.044869166Z" level=info msg="RemoveContainer for \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\" returns successfully"
Mar 12 01:31:23.045492 kubelet[2569]: I0312 01:31:23.045417 2569 scope.go:122] "RemoveContainer" containerID="56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1"
Mar 12 01:31:23.047357 containerd[1466]: time="2026-03-12T01:31:23.047307178Z" level=info msg="RemoveContainer for \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\""
Mar 12 01:31:23.052297 containerd[1466]: time="2026-03-12T01:31:23.052164610Z" level=info msg="RemoveContainer for \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\" returns successfully"
Mar 12 01:31:23.052541 kubelet[2569]: I0312 01:31:23.052491 2569 scope.go:122] "RemoveContainer" containerID="4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087"
Mar 12 01:31:23.052785 containerd[1466]: time="2026-03-12T01:31:23.052723350Z" level=error msg="ContainerStatus for \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\": not found"
Mar 12 01:31:23.053084 kubelet[2569]: E0312 01:31:23.053006 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\": not found" containerID="4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087"
Mar 12 01:31:23.053084 kubelet[2569]: I0312 01:31:23.053053 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087"} err="failed to get container status \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\": rpc error: code = NotFound desc = an error occurred when try to find container \"4990b12418699a18022198e00b363276fae113145ee2cc54b68a49f7751a5087\": not found"
Mar 12 01:31:23.053084 kubelet[2569]: I0312 01:31:23.053072 2569 scope.go:122] "RemoveContainer" containerID="df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370"
Mar 12 01:31:23.053373 containerd[1466]: time="2026-03-12T01:31:23.053212314Z" level=error msg="ContainerStatus for \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\": not found"
Mar 12 01:31:23.053440 kubelet[2569]: E0312 01:31:23.053393 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\": not found" containerID="df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370"
Mar 12 01:31:23.053440 kubelet[2569]: I0312 01:31:23.053412 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370"} err="failed to get container status \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\": rpc error: code = NotFound desc = an error occurred when try to find container \"df890b8487ddc80b60f7dbfbae6331ba5860948eb2e130c176418941aeefe370\": not found"
Mar 12 01:31:23.053440 kubelet[2569]: I0312 01:31:23.053429 2569 scope.go:122] "RemoveContainer" containerID="c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834"
Mar 12 01:31:23.053682 containerd[1466]: time="2026-03-12T01:31:23.053648085Z" level=error msg="ContainerStatus for \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\": not found"
Mar 12 01:31:23.054052 kubelet[2569]: E0312 01:31:23.053987 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\": not found" containerID="c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834"
Mar 12 01:31:23.054174 kubelet[2569]: I0312 01:31:23.054063 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834"} err="failed to get container status \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2af9f9c43ea19add66f2decaca0fd1b4ae51da2cb03ab9dde860866b049f834\": not found"
Mar 12 01:31:23.054174 kubelet[2569]: I0312 01:31:23.054102 2569 scope.go:122] "RemoveContainer" containerID="e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f"
Mar 12 01:31:23.054552 containerd[1466]: time="2026-03-12T01:31:23.054497347Z" level=error msg="ContainerStatus for \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\": not found"
Mar 12 01:31:23.054863 kubelet[2569]: E0312 01:31:23.054821 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\": not found" containerID="e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f"
Mar 12 01:31:23.055077 kubelet[2569]: I0312 01:31:23.054863 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f"} err="failed to get container status \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e12dcecd92530172d2fd3654878ba3e565b6977ef6973d73eeeda9530385811f\": not found"
Mar 12 01:31:23.055077 kubelet[2569]: I0312 01:31:23.054882 2569 scope.go:122] "RemoveContainer" containerID="56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1"
Mar 12 01:31:23.055863 containerd[1466]: time="2026-03-12T01:31:23.055256235Z" level=error msg="ContainerStatus for \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\": not found"
Mar 12 01:31:23.055939 kubelet[2569]: E0312 01:31:23.055907 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\": not found" containerID="56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1"
Mar 12 01:31:23.056005 kubelet[2569]: I0312 01:31:23.055940 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1"} err="failed to get container status \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"56f71ef4435f8eb30ed1cde0d08921f8a8f888c40dc3bc0e42c706c73e59e2a1\": not found"
Mar 12 01:31:23.132055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b923486c1ef3c07992b1aebec69d05716628a0b5090855cdc161e7d5f853aa6-rootfs.mount: Deactivated successfully.
Mar 12 01:31:23.132247 systemd[1]: var-lib-kubelet-pods-cccff58b\x2d6329\x2d4f0d\x2da95b\x2d6c7e986deb28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsrj8v.mount: Deactivated successfully.
Mar 12 01:31:23.132367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57-rootfs.mount: Deactivated successfully.
Mar 12 01:31:23.132480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b8920a89b38e0c8c021590f32ba132cf13432d94081c83a355f2f28886b5d57-shm.mount: Deactivated successfully.
Mar 12 01:31:23.132687 systemd[1]: var-lib-kubelet-pods-4d011ef8\x2d9c31\x2d4a72\x2dbe18\x2db0f8003c6132-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmf5wn.mount: Deactivated successfully.
Mar 12 01:31:23.132809 systemd[1]: var-lib-kubelet-pods-4d011ef8\x2d9c31\x2d4a72\x2dbe18\x2db0f8003c6132-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 12 01:31:23.132920 systemd[1]: var-lib-kubelet-pods-4d011ef8\x2d9c31\x2d4a72\x2dbe18\x2db0f8003c6132-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 12 01:31:23.818818 kubelet[2569]: I0312 01:31:23.818755 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4d011ef8-9c31-4a72-be18-b0f8003c6132" path="/var/lib/kubelet/pods/4d011ef8-9c31-4a72-be18-b0f8003c6132/volumes"
Mar 12 01:31:23.819850 kubelet[2569]: I0312 01:31:23.819793 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cccff58b-6329-4f0d-a95b-6c7e986deb28" path="/var/lib/kubelet/pods/cccff58b-6329-4f0d-a95b-6c7e986deb28/volumes"
Mar 12 01:31:23.960265 kubelet[2569]: E0312 01:31:23.960145 2569 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 12 01:31:24.025795 sshd[4309]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:24.042246 systemd[1]: sshd@29-10.0.0.81:22-10.0.0.1:37958.service: Deactivated successfully.
Mar 12 01:31:24.045160 systemd[1]: session-30.scope: Deactivated successfully.
Mar 12 01:31:24.048406 systemd-logind[1452]: Session 30 logged out. Waiting for processes to exit.
Mar 12 01:31:24.055240 systemd[1]: Started sshd@30-10.0.0.81:22-10.0.0.1:37970.service - OpenSSH per-connection server daemon (10.0.0.1:37970).
Mar 12 01:31:24.058805 systemd-logind[1452]: Removed session 30.
Mar 12 01:31:24.116737 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 37970 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:24.121134 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:24.130172 systemd-logind[1452]: New session 31 of user core.
Mar 12 01:31:24.138035 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 12 01:31:24.941170 sshd[4470]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:24.967112 systemd[1]: sshd@30-10.0.0.81:22-10.0.0.1:37970.service: Deactivated successfully.
Mar 12 01:31:24.979441 systemd[1]: session-31.scope: Deactivated successfully.
Mar 12 01:31:24.983439 systemd-logind[1452]: Session 31 logged out. Waiting for processes to exit.
Mar 12 01:31:24.997241 systemd[1]: Started sshd@31-10.0.0.81:22-10.0.0.1:37980.service - OpenSSH per-connection server daemon (10.0.0.1:37980).
Mar 12 01:31:25.005149 systemd-logind[1452]: Removed session 31.
Mar 12 01:31:25.024941 systemd[1]: Created slice kubepods-burstable-pod366d8d6b_5984_4423_af22_e194e1ba3cd5.slice - libcontainer container kubepods-burstable-pod366d8d6b_5984_4423_af22_e194e1ba3cd5.slice.
Mar 12 01:31:25.061563 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 37980 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:25.064314 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:25.074487 systemd-logind[1452]: New session 32 of user core.
Mar 12 01:31:25.083804 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 12 01:31:25.086109 kubelet[2569]: I0312 01:31:25.085898 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-hostproc\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086494 kubelet[2569]: I0312 01:31:25.086123 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-lib-modules\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086494 kubelet[2569]: I0312 01:31:25.086156 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/366d8d6b-5984-4423-af22-e194e1ba3cd5-cilium-ipsec-secrets\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086494 kubelet[2569]: I0312 01:31:25.086182 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/366d8d6b-5984-4423-af22-e194e1ba3cd5-hubble-tls\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086494 kubelet[2569]: I0312 01:31:25.086208 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/366d8d6b-5984-4423-af22-e194e1ba3cd5-clustermesh-secrets\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086494 kubelet[2569]: I0312 01:31:25.086235 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-cilium-run\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086494 kubelet[2569]: I0312 01:31:25.086355 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-xtables-lock\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086805 kubelet[2569]: I0312 01:31:25.086386 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-cilium-cgroup\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086805 kubelet[2569]: I0312 01:31:25.086412 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-cni-path\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086805 kubelet[2569]: I0312 01:31:25.086435 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-etc-cni-netd\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086805 kubelet[2569]: I0312 01:31:25.086460 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27dp\" (UniqueName: \"kubernetes.io/projected/366d8d6b-5984-4423-af22-e194e1ba3cd5-kube-api-access-j27dp\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086805 kubelet[2569]: I0312 01:31:25.086489 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-bpf-maps\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.086805 kubelet[2569]: I0312 01:31:25.086517 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-host-proc-sys-net\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.087137 kubelet[2569]: I0312 01:31:25.086541 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/366d8d6b-5984-4423-af22-e194e1ba3cd5-host-proc-sys-kernel\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.087137 kubelet[2569]: I0312 01:31:25.086566 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/366d8d6b-5984-4423-af22-e194e1ba3cd5-cilium-config-path\") pod \"cilium-92hsv\" (UID: \"366d8d6b-5984-4423-af22-e194e1ba3cd5\") " pod="kube-system/cilium-92hsv"
Mar 12 01:31:25.151224 sshd[4483]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:25.161067 systemd[1]: sshd@31-10.0.0.81:22-10.0.0.1:37980.service: Deactivated successfully.
Mar 12 01:31:25.163377 systemd[1]: session-32.scope: Deactivated successfully.
Mar 12 01:31:25.167316 systemd-logind[1452]: Session 32 logged out. Waiting for processes to exit.
Mar 12 01:31:25.175556 systemd[1]: Started sshd@32-10.0.0.81:22-10.0.0.1:37986.service - OpenSSH per-connection server daemon (10.0.0.1:37986).
Mar 12 01:31:25.179495 systemd-logind[1452]: Removed session 32.
Mar 12 01:31:25.240917 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 37986 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:31:25.244050 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:31:25.255269 systemd-logind[1452]: New session 33 of user core.
Mar 12 01:31:25.269170 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 12 01:31:25.342814 kubelet[2569]: E0312 01:31:25.342445 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:25.344126 containerd[1466]: time="2026-03-12T01:31:25.343219967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92hsv,Uid:366d8d6b-5984-4423-af22-e194e1ba3cd5,Namespace:kube-system,Attempt:0,}"
Mar 12 01:31:25.404115 containerd[1466]: time="2026-03-12T01:31:25.402422016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:31:25.404275 containerd[1466]: time="2026-03-12T01:31:25.402709159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:31:25.404275 containerd[1466]: time="2026-03-12T01:31:25.402777107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:31:25.404275 containerd[1466]: time="2026-03-12T01:31:25.403664179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:31:25.446251 systemd[1]: Started cri-containerd-9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c.scope - libcontainer container 9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c.
Mar 12 01:31:25.524166 containerd[1466]: time="2026-03-12T01:31:25.521656250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92hsv,Uid:366d8d6b-5984-4423-af22-e194e1ba3cd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\""
Mar 12 01:31:25.524351 kubelet[2569]: E0312 01:31:25.523303 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:25.536079 containerd[1466]: time="2026-03-12T01:31:25.535035462Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 01:31:25.581844 containerd[1466]: time="2026-03-12T01:31:25.581758983Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6\""
Mar 12 01:31:25.582734 containerd[1466]: time="2026-03-12T01:31:25.582654277Z" level=info msg="StartContainer for \"d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6\""
Mar 12 01:31:25.650858 systemd[1]: Started cri-containerd-d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6.scope - libcontainer container d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6.
Mar 12 01:31:25.773991 containerd[1466]: time="2026-03-12T01:31:25.773850865Z" level=info msg="StartContainer for \"d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6\" returns successfully"
Mar 12 01:31:25.793223 systemd[1]: cri-containerd-d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6.scope: Deactivated successfully.
Mar 12 01:31:25.914084 containerd[1466]: time="2026-03-12T01:31:25.912970617Z" level=info msg="shim disconnected" id=d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6 namespace=k8s.io
Mar 12 01:31:25.914084 containerd[1466]: time="2026-03-12T01:31:25.913077018Z" level=warning msg="cleaning up after shim disconnected" id=d38817aabe295cad4c39ccb1ef528d3543be1e01b17a65941edaa472d86243c6 namespace=k8s.io
Mar 12 01:31:25.914084 containerd[1466]: time="2026-03-12T01:31:25.913090524Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:26.000408 kubelet[2569]: E0312 01:31:25.997459 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:26.815141 kubelet[2569]: E0312 01:31:26.814944 2569 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-wh29r" podUID="de534575-803d-405b-b088-1772e9813f06"
Mar 12 01:31:27.004289 kubelet[2569]: E0312 01:31:27.003927 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:27.015693 containerd[1466]: time="2026-03-12T01:31:27.015501267Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 01:31:27.081714 containerd[1466]: time="2026-03-12T01:31:27.081468286Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3\""
Mar 12 01:31:27.082538 containerd[1466]: time="2026-03-12T01:31:27.082466209Z" level=info msg="StartContainer for \"4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3\""
Mar 12 01:31:27.185815 systemd[1]: Started cri-containerd-4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3.scope - libcontainer container 4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3.
Mar 12 01:31:27.313388 containerd[1466]: time="2026-03-12T01:31:27.313151499Z" level=info msg="StartContainer for \"4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3\" returns successfully"
Mar 12 01:31:27.326389 systemd[1]: cri-containerd-4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3.scope: Deactivated successfully.
Mar 12 01:31:27.397872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3-rootfs.mount: Deactivated successfully.
Mar 12 01:31:27.426189 containerd[1466]: time="2026-03-12T01:31:27.423193487Z" level=info msg="shim disconnected" id=4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3 namespace=k8s.io
Mar 12 01:31:27.426189 containerd[1466]: time="2026-03-12T01:31:27.423271995Z" level=warning msg="cleaning up after shim disconnected" id=4c646d719bc8d0cdb06c57011031582b6654fa80d83a2cf8442bef229c4e6df3 namespace=k8s.io
Mar 12 01:31:27.426189 containerd[1466]: time="2026-03-12T01:31:27.423289288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:27.493495 kubelet[2569]: I0312 01:31:27.493400 2569 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T01:31:27Z","lastTransitionTime":"2026-03-12T01:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 12 01:31:28.010478 kubelet[2569]: E0312 01:31:28.007958 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:28.028533 containerd[1466]: time="2026-03-12T01:31:28.028418885Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 01:31:28.103606 containerd[1466]: time="2026-03-12T01:31:28.103501215Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab\""
Mar 12 01:31:28.106735 containerd[1466]: time="2026-03-12T01:31:28.104327993Z" level=info msg="StartContainer for \"cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab\""
Mar 12 01:31:28.194928 systemd[1]: Started cri-containerd-cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab.scope - libcontainer container cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab.
Mar 12 01:31:28.275781 containerd[1466]: time="2026-03-12T01:31:28.275540927Z" level=info msg="StartContainer for \"cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab\" returns successfully"
Mar 12 01:31:28.278951 systemd[1]: cri-containerd-cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab.scope: Deactivated successfully.
Mar 12 01:31:28.324726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab-rootfs.mount: Deactivated successfully.
Mar 12 01:31:28.347430 containerd[1466]: time="2026-03-12T01:31:28.347174489Z" level=info msg="shim disconnected" id=cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab namespace=k8s.io
Mar 12 01:31:28.347879 containerd[1466]: time="2026-03-12T01:31:28.347839525Z" level=warning msg="cleaning up after shim disconnected" id=cbe8714759e7b5a9a7aea113c675dd5c2f3540b1f09c9a235f77a087435bddab namespace=k8s.io
Mar 12 01:31:28.348110 containerd[1466]: time="2026-03-12T01:31:28.347881505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:28.816162 kubelet[2569]: E0312 01:31:28.814454 2569 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-wh29r" podUID="de534575-803d-405b-b088-1772e9813f06"
Mar 12 01:31:28.969496 kubelet[2569]: E0312 01:31:28.969320 2569 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 12 01:31:29.026451 kubelet[2569]: E0312 01:31:29.025733 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:29.040465 containerd[1466]: time="2026-03-12T01:31:29.039977987Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 01:31:29.095354 containerd[1466]: time="2026-03-12T01:31:29.095026269Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689\""
Mar 12 01:31:29.096204 containerd[1466]: time="2026-03-12T01:31:29.096130887Z" level=info msg="StartContainer for \"10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689\""
Mar 12 01:31:29.202894 systemd[1]: Started cri-containerd-10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689.scope - libcontainer container 10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689.
Mar 12 01:31:29.272445 systemd[1]: cri-containerd-10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689.scope: Deactivated successfully.
Mar 12 01:31:29.280666 containerd[1466]: time="2026-03-12T01:31:29.280528064Z" level=info msg="StartContainer for \"10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689\" returns successfully"
Mar 12 01:31:29.339737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689-rootfs.mount: Deactivated successfully.
Mar 12 01:31:29.353987 containerd[1466]: time="2026-03-12T01:31:29.353629466Z" level=info msg="shim disconnected" id=10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689 namespace=k8s.io
Mar 12 01:31:29.353987 containerd[1466]: time="2026-03-12T01:31:29.353721019Z" level=warning msg="cleaning up after shim disconnected" id=10a43ac0777271e217dd2a567dd7c92608d762a2df7387ea837324d91b944689 namespace=k8s.io
Mar 12 01:31:29.353987 containerd[1466]: time="2026-03-12T01:31:29.353736127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:31:30.037184 kubelet[2569]: E0312 01:31:30.037053 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:30.051918 containerd[1466]: time="2026-03-12T01:31:30.051861662Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 01:31:30.127052 containerd[1466]: time="2026-03-12T01:31:30.126656482Z" level=info msg="CreateContainer within sandbox \"9c80cdf51798d5aab4f052ffcc3a7c9f7820674188e0ccf4bd620c363156696c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33\""
Mar 12 01:31:30.129933 containerd[1466]: time="2026-03-12T01:31:30.129803422Z" level=info msg="StartContainer for \"6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33\""
Mar 12 01:31:30.210076 systemd[1]: Started cri-containerd-6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33.scope - libcontainer container 6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33.
Mar 12 01:31:30.300833 containerd[1466]: time="2026-03-12T01:31:30.300629021Z" level=info msg="StartContainer for \"6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33\" returns successfully"
Mar 12 01:31:30.815007 kubelet[2569]: E0312 01:31:30.814490 2569 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-wh29r" podUID="de534575-803d-405b-b088-1772e9813f06"
Mar 12 01:31:31.067963 kubelet[2569]: E0312 01:31:31.066478 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:31.287980 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 12 01:31:32.070535 kubelet[2569]: E0312 01:31:32.070145 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:32.815302 kubelet[2569]: E0312 01:31:32.813980 2569 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-wh29r" podUID="de534575-803d-405b-b088-1772e9813f06"
Mar 12 01:31:34.814670 kubelet[2569]: E0312 01:31:34.814276 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:36.530224 systemd-networkd[1406]: lxc_health: Link UP
Mar 12 01:31:36.535964 systemd-networkd[1406]: lxc_health: Gained carrier
Mar 12 01:31:36.845914 systemd[1]: run-containerd-runc-k8s.io-6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33-runc.UFD6Am.mount: Deactivated successfully.
Mar 12 01:31:37.341437 kubelet[2569]: E0312 01:31:37.341258 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:37.388393 kubelet[2569]: I0312 01:31:37.387511 2569 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-92hsv" podStartSLOduration=13.387496113 podStartE2EDuration="13.387496113s" podCreationTimestamp="2026-03-12 01:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:31:31.106500219 +0000 UTC m=+147.466483424" watchObservedRunningTime="2026-03-12 01:31:37.387496113 +0000 UTC m=+153.747479339"
Mar 12 01:31:37.691672 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Mar 12 01:31:38.103740 kubelet[2569]: E0312 01:31:38.100816 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:39.104642 kubelet[2569]: E0312 01:31:39.103493 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:42.815233 kubelet[2569]: E0312 01:31:42.815135 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:31:43.479056 systemd[1]: run-containerd-runc-k8s.io-6565cce1b98fbcfa12701e4a9dedc104c7c6c086c7f62c991157c54b84629f33-runc.uSUpy9.mount: Deactivated successfully.
Mar 12 01:31:45.780519 sshd[4491]: pam_unix(sshd:session): session closed for user core
Mar 12 01:31:45.784773 systemd[1]: sshd@32-10.0.0.81:22-10.0.0.1:37986.service: Deactivated successfully.
Mar 12 01:31:45.787152 systemd[1]: session-33.scope: Deactivated successfully.
Mar 12 01:31:45.789452 systemd-logind[1452]: Session 33 logged out. Waiting for processes to exit.
Mar 12 01:31:45.791189 systemd-logind[1452]: Removed session 33.
Mar 12 01:31:46.815192 kubelet[2569]: E0312 01:31:46.815041 2569 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"