Mar 10 01:32:21.679205 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 01:32:21.679255 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:32:21.679274 kernel: BIOS-provided physical RAM map:
Mar 10 01:32:21.679282 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 01:32:21.679290 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 01:32:21.679298 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 01:32:21.679311 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 01:32:21.679320 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 01:32:21.679330 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:32:21.679347 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 01:32:21.679356 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 01:32:21.679364 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 01:32:21.680248 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 01:32:21.687488 kernel: NX (Execute Disable) protection: active
Mar 10 01:32:21.687639 kernel: APIC: Static calls initialized
Mar 10 01:32:21.687719 kernel: SMBIOS 2.8 present.
Mar 10 01:32:21.687732 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 01:32:21.687741 kernel: Hypervisor detected: KVM
Mar 10 01:32:21.687749 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:32:21.687758 kernel: kvm-clock: using sched offset of 24665858919 cycles
Mar 10 01:32:21.687768 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:32:21.687778 kernel: tsc: Detected 2445.424 MHz processor
Mar 10 01:32:21.687788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:32:21.687800 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:32:21.687833 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 01:32:21.687843 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 01:32:21.687852 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:32:21.687862 kernel: Using GB pages for direct mapping
Mar 10 01:32:21.687872 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:32:21.687881 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 01:32:21.687891 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687900 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687910 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687923 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 01:32:21.687933 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687943 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687952 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687962 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:32:21.687972 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 01:32:21.687981 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 01:32:21.687996 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 01:32:21.688009 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 01:32:21.688019 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 01:32:21.688029 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 01:32:21.688038 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 01:32:21.688048 kernel: No NUMA configuration found
Mar 10 01:32:21.688061 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 01:32:21.690441 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 10 01:32:21.690455 kernel: Zone ranges:
Mar 10 01:32:21.690466 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:32:21.690476 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 01:32:21.690486 kernel: Normal empty
Mar 10 01:32:21.690495 kernel: Movable zone start for each node
Mar 10 01:32:21.690592 kernel: Early memory node ranges
Mar 10 01:32:21.690605 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 01:32:21.690615 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 01:32:21.690685 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 01:32:21.690697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:32:21.690755 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 01:32:21.690768 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 01:32:21.690778 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:32:21.690788 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:32:21.690798 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:32:21.690808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:32:21.690818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:32:21.690834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:32:21.690846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:32:21.690857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:32:21.690868 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:32:21.690879 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 01:32:21.690891 kernel: TSC deadline timer available
Mar 10 01:32:21.690902 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 10 01:32:21.690914 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:32:21.690924 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 01:32:21.690992 kernel: kvm-guest: setup PV sched yield
Mar 10 01:32:21.691005 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 01:32:21.691016 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:32:21.691028 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:32:21.691040 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 01:32:21.691051 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 10 01:32:21.691063 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 10 01:32:21.691074 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 01:32:21.691085 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:32:21.691168 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:32:21.691180 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:32:21.691190 kernel: random: crng init done
Mar 10 01:32:21.691200 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 01:32:21.691211 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:32:21.691221 kernel: Fallback order for Node 0: 0
Mar 10 01:32:21.691232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 10 01:32:21.691242 kernel: Policy zone: DMA32
Mar 10 01:32:21.691252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:32:21.691267 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 136884K reserved, 0K cma-reserved)
Mar 10 01:32:21.691277 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 01:32:21.691288 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 01:32:21.691298 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 01:32:21.691308 kernel: Dynamic Preempt: voluntary
Mar 10 01:32:21.691319 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:32:21.691331 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:32:21.691342 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 01:32:21.691356 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:32:21.691366 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:32:21.691378 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:32:21.691389 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:32:21.691399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 01:32:21.691461 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 01:32:21.691473 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:32:21.691483 kernel: Console: colour VGA+ 80x25
Mar 10 01:32:21.691493 kernel: printk: console [ttyS0] enabled
Mar 10 01:32:21.692428 kernel: ACPI: Core revision 20230628
Mar 10 01:32:21.692453 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 01:32:21.692464 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:32:21.692475 kernel: x2apic enabled
Mar 10 01:32:21.692485 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:32:21.692496 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 01:32:21.692597 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 01:32:21.692608 kernel: kvm-guest: setup PV IPIs
Mar 10 01:32:21.692619 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 01:32:21.692875 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 10 01:32:21.692888 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 10 01:32:21.692898 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:32:21.692958 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 01:32:21.692969 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 01:32:21.692979 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:32:21.692989 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:32:21.692999 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:32:21.693056 kernel: Speculative Store Bypass: Vulnerable
Mar 10 01:32:21.693067 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 01:32:21.693146 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 01:32:21.693161 kernel: active return thunk: srso_alias_return_thunk
Mar 10 01:32:21.693172 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 01:32:21.693182 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 01:32:21.693192 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 01:32:21.693202 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:32:21.693216 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:32:21.693227 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:32:21.693239 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:32:21.693251 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 01:32:21.693264 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:32:21.693274 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:32:21.693284 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 01:32:21.693293 kernel: landlock: Up and running.
Mar 10 01:32:21.693303 kernel: SELinux: Initializing.
Mar 10 01:32:21.693318 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:32:21.693329 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:32:21.693342 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 01:32:21.693355 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:32:21.693367 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:32:21.693377 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:32:21.693387 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 01:32:21.693397 kernel: signal: max sigframe size: 1776
Mar 10 01:32:21.693464 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:32:21.693483 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:32:21.693493 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:32:21.693868 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:32:21.693883 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:32:21.693893 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 01:32:21.693903 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 01:32:21.693914 kernel: smpboot: Max logical packages: 1
Mar 10 01:32:21.693926 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 10 01:32:21.693940 kernel: devtmpfs: initialized
Mar 10 01:32:21.693957 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:32:21.693967 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:32:21.693977 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 01:32:21.693987 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:32:21.693996 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:32:21.694006 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:32:21.694019 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:32:21.694031 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:32:21.694044 kernel: audit: type=2000 audit(1773106331.264:1): state=initialized audit_enabled=0 res=1
Mar 10 01:32:21.694058 kernel: cpuidle: using governor menu
Mar 10 01:32:21.694068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:32:21.694078 kernel: dca service started, version 1.12.1
Mar 10 01:32:21.694088 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 01:32:21.694164 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:32:21.694175 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:32:21.694185 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:32:21.694195 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:32:21.694205 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:32:21.694224 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:32:21.694238 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:32:21.694249 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:32:21.694259 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:32:21.694320 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:32:21.694334 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:32:21.694345 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 01:32:21.694355 kernel: ACPI: Interpreter enabled
Mar 10 01:32:21.694365 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 01:32:21.694432 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:32:21.694445 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:32:21.694455 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:32:21.694464 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:32:21.694474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:32:21.695940 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:32:21.696226 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 01:32:21.696430 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 01:32:21.696453 kernel: PCI host bridge to bus 0000:00
Mar 10 01:32:21.697282 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:32:21.697475 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:32:21.698298 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:32:21.701420 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 01:32:21.702458 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:32:21.703206 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 01:32:21.703422 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:32:21.704410 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 01:32:21.705197 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 10 01:32:21.705407 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 10 01:32:21.707006 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 10 01:32:21.707297 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 10 01:32:21.707620 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:32:21.707971 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 10 01:32:21.710482 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 10 01:32:21.711035 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 10 01:32:21.719394 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 01:32:21.720739 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 10 01:32:21.720965 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 10 01:32:21.724470 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 10 01:32:21.724836 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 01:32:21.726494 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 10 01:32:21.726803 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 10 01:32:21.727009 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 10 01:32:21.727346 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 01:32:21.727698 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 10 01:32:21.731405 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 01:32:21.731728 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:32:21.732178 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 01:32:21.734616 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 10 01:32:21.734926 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 10 01:32:21.735318 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 01:32:21.735669 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 10 01:32:21.735688 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:32:21.735699 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:32:21.735709 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:32:21.735721 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:32:21.735734 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:32:21.735745 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:32:21.735755 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:32:21.735765 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:32:21.735783 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:32:21.735793 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:32:21.738398 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:32:21.738452 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:32:21.738463 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:32:21.738474 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:32:21.738768 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:32:21.738784 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:32:21.738794 kernel: iommu: Default domain type: Translated
Mar 10 01:32:21.738815 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:32:21.738825 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:32:21.738836 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:32:21.738847 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 01:32:21.738858 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 01:32:21.739212 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:32:21.739473 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:32:21.739783 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:32:21.739813 kernel: vgaarb: loaded
Mar 10 01:32:21.739825 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 01:32:21.739836 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 01:32:21.739845 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:32:21.739855 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:32:21.739865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:32:21.739875 kernel: pnp: PnP ACPI init
Mar 10 01:32:21.742679 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:32:21.742735 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 01:32:21.742749 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:32:21.742760 kernel: NET: Registered PF_INET protocol family
Mar 10 01:32:21.742771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 01:32:21.742830 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 01:32:21.742841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:32:21.742852 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:32:21.742864 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 01:32:21.742876 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 01:32:21.742893 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:32:21.742904 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:32:21.742916 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:32:21.742927 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:32:21.743199 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:32:21.743386 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:32:21.743672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:32:21.743866 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 01:32:21.744047 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:32:21.744392 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 01:32:21.744411 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:32:21.744423 kernel: Initialise system trusted keyrings
Mar 10 01:32:21.744436 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 01:32:21.744447 kernel: Key type asymmetric registered
Mar 10 01:32:21.744459 kernel: Asymmetric key parser 'x509' registered
Mar 10 01:32:21.744470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 10 01:32:21.744482 kernel: io scheduler mq-deadline registered
Mar 10 01:32:21.744493 kernel: io scheduler kyber registered
Mar 10 01:32:21.744576 kernel: io scheduler bfq registered
Mar 10 01:32:21.744589 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 01:32:21.744602 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:32:21.744614 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 01:32:21.744626 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 01:32:21.744637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 01:32:21.744649 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 01:32:21.744661 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 01:32:21.744673 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 01:32:21.744689 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 01:32:21.745178 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 01:32:21.745200 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 01:32:21.745387 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 01:32:21.745403 kernel: hpet: Lost 1 RTC interrupts
Mar 10 01:32:21.745670 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:32:19 UTC (1773106339)
Mar 10 01:32:21.749622 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 01:32:21.749644 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 01:32:21.749664 kernel: NET: Registered PF_INET6 protocol family
Mar 10 01:32:21.749677 kernel: Segment Routing with IPv6
Mar 10 01:32:21.749729 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 01:32:21.749742 kernel: NET: Registered PF_PACKET protocol family
Mar 10 01:32:21.749752 kernel: Key type dns_resolver registered
Mar 10 01:32:21.749762 kernel: IPI shorthand broadcast: enabled
Mar 10 01:32:21.749818 kernel: sched_clock: Marking stable (5026042679, 2439640790)->(9677171715, -2211488246)
Mar 10 01:32:21.749830 kernel: registered taskstats version 1
Mar 10 01:32:21.749840 kernel: Loading compiled-in X.509 certificates
Mar 10 01:32:21.749858 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa'
Mar 10 01:32:21.749869 kernel: Key type .fscrypt registered
Mar 10 01:32:21.749881 kernel: Key type fscrypt-provisioning registered
Mar 10 01:32:21.749892 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 01:32:21.749904 kernel: ima: Allocated hash algorithm: sha1
Mar 10 01:32:21.749915 kernel: ima: No architecture policies found
Mar 10 01:32:21.749926 kernel: clk: Disabling unused clocks
Mar 10 01:32:21.749938 kernel: Freeing unused kernel image (initmem) memory: 42896K
Mar 10 01:32:21.749953 kernel: Write protecting the kernel read-only data: 36864k
Mar 10 01:32:21.749965 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 10 01:32:21.750018 kernel: Run /init as init process
Mar 10 01:32:21.750030 kernel: with arguments:
Mar 10 01:32:21.750042 kernel: /init
Mar 10 01:32:21.750053 kernel: with environment:
Mar 10 01:32:21.750064 kernel: HOME=/
Mar 10 01:32:21.750075 kernel: TERM=linux
Mar 10 01:32:21.750140 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:32:21.750204 systemd[1]: Detected virtualization kvm.
Mar 10 01:32:21.750217 systemd[1]: Detected architecture x86-64.
Mar 10 01:32:21.750229 systemd[1]: Running in initrd.
Mar 10 01:32:21.750241 systemd[1]: No hostname configured, using default hostname.
Mar 10 01:32:21.750252 systemd[1]: Hostname set to .
Mar 10 01:32:21.750266 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:32:21.750278 systemd[1]: Queued start job for default target initrd.target.
Mar 10 01:32:21.750290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:32:21.750340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:32:21.750353 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 01:32:21.750365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:32:21.750377 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 01:32:21.750390 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 01:32:21.750404 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 01:32:21.750421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 01:32:21.750433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:32:21.750446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:32:21.750458 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:32:21.750470 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:32:21.750571 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:32:21.750622 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:32:21.750635 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:32:21.750647 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:32:21.750660 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:32:21.750673 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 01:32:21.750685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:32:21.750698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:32:21.750711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:32:21.750724 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:32:21.750743 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 01:32:21.750755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:32:21.750766 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 01:32:21.750776 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 01:32:21.750838 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:32:21.750851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:32:21.750864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:32:21.750914 systemd-journald[194]: Collecting audit messages is disabled.
Mar 10 01:32:21.750947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 01:32:21.750960 systemd-journald[194]: Journal started
Mar 10 01:32:21.750988 systemd-journald[194]: Runtime Journal (/run/log/journal/ff20f94fee90472289f3d63aa38786d3) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:32:21.757361 systemd-modules-load[195]: Inserted module 'overlay'
Mar 10 01:32:21.776697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:32:21.821635 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:32:21.836844 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 01:32:22.248824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:32:22.545484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 01:32:22.545672 kernel: Bridge firewalling registered
Mar 10 01:32:22.352396 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 10 01:32:22.613050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:32:22.643495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:32:22.699173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:32:22.728383 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 01:32:22.745364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:32:22.853774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:32:22.895031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:32:22.905892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:32:23.021453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:32:23.073048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:32:23.103408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:32:23.187987 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 01:32:23.209609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:32:23.327266 dracut-cmdline[229]: dracut-dracut-053
Mar 10 01:32:23.329199 systemd-resolved[231]: Positive Trust Anchors:
Mar 10 01:32:23.329216 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:32:23.329263 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:32:23.416368 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:32:23.333863 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 10 01:32:23.337415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:32:23.346702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:32:23.733912 kernel: SCSI subsystem initialized
Mar 10 01:32:23.807871 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 01:32:23.880913 kernel: iscsi: registered transport (tcp)
Mar 10 01:32:23.945436 kernel: iscsi: registered transport (qla4xxx)
Mar 10 01:32:23.945879 kernel: QLogic iSCSI HBA Driver
Mar 10 01:32:24.214235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:32:24.280768 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 01:32:24.491978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 01:32:24.493486 kernel: device-mapper: uevent: version 1.0.3
Mar 10 01:32:24.493607 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 10 01:32:24.694761 kernel: raid6: avx2x4 gen() 11809 MB/s
Mar 10 01:32:24.712235 kernel: raid6: avx2x2 gen() 13851 MB/s
Mar 10 01:32:24.735791 kernel: raid6: avx2x1 gen() 11696 MB/s
Mar 10 01:32:24.735878 kernel: raid6: using algorithm avx2x2 gen() 13851 MB/s
Mar 10 01:32:24.762464 kernel: raid6: .... xor() 13155 MB/s, rmw enabled
Mar 10 01:32:24.770268 kernel: raid6: using avx2x2 recovery algorithm
Mar 10 01:32:24.832045 kernel: xor: automatically using best checksumming function avx
Mar 10 01:32:25.764680 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 01:32:25.836842 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:32:25.874224 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:32:25.927361 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 10 01:32:25.951636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:32:26.031843 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 01:32:26.111267 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Mar 10 01:32:26.227722 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:32:26.283754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:32:26.484826 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:32:26.553352 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 01:32:26.616224 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:32:26.649618 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:32:26.669687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:32:26.757256 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:32:26.998414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 01:32:27.028271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:32:27.093204 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 10 01:32:27.094013 kernel: cryptd: max_cpu_qlen set to 1000
Mar 10 01:32:27.094040 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 10 01:32:27.094823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 10 01:32:27.094842 kernel: GPT:9289727 != 19775487
Mar 10 01:32:27.028458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:32:27.127440 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 10 01:32:27.127474 kernel: GPT:9289727 != 19775487
Mar 10 01:32:27.127494 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 10 01:32:27.127674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:32:27.080013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:32:27.136788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:32:27.142985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:32:27.151631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:32:27.282200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:32:27.306951 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:32:27.355895 kernel: libata version 3.00 loaded.
Mar 10 01:32:27.418613 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 10 01:32:27.418809 kernel: ahci 0000:00:1f.2: version 3.0
Mar 10 01:32:27.424216 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 10 01:32:27.431619 kernel: AES CTR mode by8 optimization enabled
Mar 10 01:32:27.438039 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 10 01:32:27.438410 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 10 01:32:27.461700 kernel: scsi host0: ahci
Mar 10 01:32:27.467185 kernel: scsi host1: ahci
Mar 10 01:32:27.471636 kernel: scsi host2: ahci
Mar 10 01:32:27.473843 kernel: scsi host3: ahci
Mar 10 01:32:27.476088 kernel: scsi host4: ahci
Mar 10 01:32:27.482216 kernel: scsi host5: ahci
Mar 10 01:32:27.482893 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 10 01:32:27.482918 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 10 01:32:27.482933 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 10 01:32:27.482952 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 10 01:32:27.482966 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 10 01:32:27.482991 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 10 01:32:27.548331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 10 01:32:28.451926 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Mar 10 01:32:28.451978 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (472)
Mar 10 01:32:28.451997 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 10 01:32:28.452012 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 10 01:32:28.452026 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 10 01:32:28.452043 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 10 01:32:28.452059 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 10 01:32:28.452075 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 10 01:32:28.452091 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 10 01:32:28.452181 kernel: ata3.00: applying bridge limits
Mar 10 01:32:28.452199 kernel: ata3.00: configured for UDMA/100
Mar 10 01:32:28.452214 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 10 01:32:28.458628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:32:28.514992 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 10 01:32:28.556387 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:32:28.695939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 10 01:32:28.819286 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 10 01:32:28.882793 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 10 01:32:28.883410 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 01:32:28.885043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 01:32:28.904462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:32:28.930899 disk-uuid[565]: Primary Header is updated.
Mar 10 01:32:28.930899 disk-uuid[565]: Secondary Entries is updated.
Mar 10 01:32:28.930899 disk-uuid[565]: Secondary Header is updated.
Mar 10 01:32:28.973860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:32:28.973893 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 10 01:32:28.991342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:32:29.031220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:32:29.504481 kernel: hrtimer: interrupt took 10273114 ns
Mar 10 01:32:29.990059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:32:29.999647 disk-uuid[566]: The operation has completed successfully.
Mar 10 01:32:30.155181 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 01:32:30.155462 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 01:32:30.209987 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 01:32:30.242431 sh[590]: Success
Mar 10 01:32:30.369983 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 10 01:32:30.632888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 01:32:30.636790 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 01:32:30.698476 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 01:32:30.754322 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486
Mar 10 01:32:30.754397 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:32:30.754418 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 10 01:32:30.765623 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 10 01:32:30.770787 kernel: BTRFS info (device dm-0): using free space tree
Mar 10 01:32:30.832831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 01:32:30.853690 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 01:32:30.901884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 01:32:30.926625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 01:32:31.014897 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:32:31.015399 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:32:31.019449 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:32:31.110870 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:32:31.216629 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 10 01:32:31.254332 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:32:31.338219 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 01:32:31.373299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 01:32:33.159190 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:32:33.220658 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:32:33.282397 ignition[694]: Ignition 2.19.0
Mar 10 01:32:33.282477 ignition[694]: Stage: fetch-offline
Mar 10 01:32:33.286055 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:32:33.286241 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:32:33.286672 ignition[694]: parsed url from cmdline: ""
Mar 10 01:32:33.286681 ignition[694]: no config URL provided
Mar 10 01:32:33.286691 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 01:32:33.286707 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Mar 10 01:32:33.286795 ignition[694]: op(1): [started] loading QEMU firmware config module
Mar 10 01:32:33.286804 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 10 01:32:33.471339 systemd-networkd[776]: lo: Link UP
Mar 10 01:32:33.472103 systemd-networkd[776]: lo: Gained carrier
Mar 10 01:32:33.508090 systemd-networkd[776]: Enumeration completed
Mar 10 01:32:33.557867 ignition[694]: op(1): [finished] loading QEMU firmware config module
Mar 10 01:32:33.508710 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:32:33.527414 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:32:33.527420 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:32:33.547877 systemd[1]: Reached target network.target - Network.
Mar 10 01:32:33.548047 systemd-networkd[776]: eth0: Link UP
Mar 10 01:32:33.548055 systemd-networkd[776]: eth0: Gained carrier
Mar 10 01:32:33.548074 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:32:33.660320 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:32:34.531804 ignition[694]: parsing config with SHA512: 5e06c3fca93d62387655167ce7a598cf8a54182229c5e5f70fb11bfd0fce6a7802cfb97f0da4949529c670447af45ce5f42cbbde9f3bcbabe99b7c9ae184fc4d
Mar 10 01:32:34.597620 unknown[694]: fetched base config from "system"
Mar 10 01:32:34.597688 unknown[694]: fetched user config from "qemu"
Mar 10 01:32:34.609703 ignition[694]: fetch-offline: fetch-offline passed
Mar 10 01:32:34.609848 ignition[694]: Ignition finished successfully
Mar 10 01:32:34.648114 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:32:34.680788 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 10 01:32:34.718029 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 01:32:35.040903 ignition[783]: Ignition 2.19.0
Mar 10 01:32:35.040982 ignition[783]: Stage: kargs
Mar 10 01:32:35.041294 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:32:35.041311 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:32:35.050767 ignition[783]: kargs: kargs passed
Mar 10 01:32:35.051088 ignition[783]: Ignition finished successfully
Mar 10 01:32:35.117114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 01:32:35.153642 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 01:32:35.280717 systemd-networkd[776]: eth0: Gained IPv6LL
Mar 10 01:32:35.361853 ignition[792]: Ignition 2.19.0
Mar 10 01:32:35.361871 ignition[792]: Stage: disks
Mar 10 01:32:35.362104 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:32:35.362120 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:32:35.371865 ignition[792]: disks: disks passed
Mar 10 01:32:35.371942 ignition[792]: Ignition finished successfully
Mar 10 01:32:35.409317 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 01:32:35.422802 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 01:32:35.441009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:32:35.455402 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:32:35.480894 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:32:35.542724 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:32:35.617088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 01:32:35.747359 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 10 01:32:35.825747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 01:32:35.924735 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 01:32:36.734000 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none.
Mar 10 01:32:36.734658 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 01:32:36.750081 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:32:36.811098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:32:36.832746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 01:32:36.923087 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Mar 10 01:32:36.923231 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:32:36.923248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:32:36.923263 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:32:36.875268 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 01:32:36.875342 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 01:32:36.875381 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:32:36.997691 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:32:36.943947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 01:32:36.989471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:32:37.050002 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 01:32:37.362840 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 01:32:37.412964 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 10 01:32:37.455755 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 01:32:37.493463 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 01:32:38.485974 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 01:32:38.523823 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 01:32:38.561496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 01:32:38.645687 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 01:32:38.693454 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:32:38.841332 ignition[924]: INFO : Ignition 2.19.0
Mar 10 01:32:38.841332 ignition[924]: INFO : Stage: mount
Mar 10 01:32:38.841332 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:32:38.841332 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:32:38.856456 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 01:32:38.943925 ignition[924]: INFO : mount: mount passed
Mar 10 01:32:38.943925 ignition[924]: INFO : Ignition finished successfully
Mar 10 01:32:38.910131 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 01:32:38.992326 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 01:32:39.031996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:32:39.111212 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 10 01:32:39.128350 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:32:39.128438 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:32:39.128455 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:32:39.164442 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:32:39.176824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:32:39.287766 ignition[955]: INFO : Ignition 2.19.0
Mar 10 01:32:39.287766 ignition[955]: INFO : Stage: files
Mar 10 01:32:39.298845 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:32:39.298845 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:32:39.298845 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 01:32:39.339480 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 01:32:39.339480 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 01:32:39.376478 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 01:32:39.394753 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 01:32:39.394753 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 01:32:39.394753 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:32:39.394753 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 01:32:39.382992 unknown[955]: wrote ssh authorized keys file for user: core
Mar 10 01:32:39.579891 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 01:32:40.396255 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:32:40.396255 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:32:40.396255 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 10 01:32:40.484218 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 10 01:32:41.704969 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:32:41.704969 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 01:32:41.704969 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 01:32:41.704969 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:32:41.704969 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:32:41.704969 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:32:41.889059 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 10 01:32:42.226027 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 10 01:32:50.227484 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 01:32:50.227484 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 10 01:32:50.265433 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:32:50.694343 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:32:50.760404 ignition[955]: INFO : files: files passed
Mar 10 01:32:50.760404 ignition[955]: INFO : Ignition finished successfully
Mar 10 01:32:50.770986 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 01:32:50.845451 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 01:32:50.930859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 01:32:51.010103 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 01:32:51.010485 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 01:32:51.096879 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 01:32:51.137253 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:32:51.137253 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:32:51.212904 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:32:51.151907 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:32:51.172988 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 01:32:51.245444 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:32:51.407857 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:32:51.415676 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:32:51.428137 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:32:51.428345 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:32:51.428675 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:32:51.480953 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:32:51.604494 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:32:51.635084 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:32:51.743903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:32:51.780368 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:32:51.806710 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:32:51.833018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:32:51.833359 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:32:51.903074 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:32:51.913285 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:32:51.927104 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:32:51.967627 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:32:52.037123 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:32:52.088098 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:32:52.124388 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:32:52.150075 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:32:52.174132 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:32:52.194739 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:32:52.205454 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:32:52.206104 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:32:52.217978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:32:52.230836 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:32:52.231032 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:32:52.231835 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:32:52.242287 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:32:52.242668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:32:52.256813 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:32:52.543018 ignition[1010]: INFO : Ignition 2.19.0
Mar 10 01:32:52.543018 ignition[1010]: INFO : Stage: umount
Mar 10 01:32:52.543018 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:32:52.543018 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:32:52.543018 ignition[1010]: INFO : umount: umount passed
Mar 10 01:32:52.543018 ignition[1010]: INFO : Ignition finished successfully
Mar 10 01:32:52.257124 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:32:52.257835 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:32:52.262821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:32:52.272033 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:32:52.273955 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:32:52.274123 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:32:52.274372 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:32:52.274700 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:32:52.274892 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:32:52.275021 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:32:52.275362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:32:52.275653 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:32:52.284027 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:32:52.284395 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:32:52.426485 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:32:52.448484 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:32:52.449406 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:32:52.473175 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:32:52.497079 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:32:52.500163 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:32:52.522145 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:32:52.523040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:32:52.562156 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:32:52.564012 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:32:52.564773 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:32:52.584014 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:32:52.584309 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:32:52.607474 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:32:52.609374 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:32:52.631140 systemd[1]: Stopped target network.target - Network.
Mar 10 01:32:52.637855 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:32:52.637965 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:32:52.642761 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:32:52.642841 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:32:52.651035 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:32:52.651091 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:32:52.668167 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:32:52.668946 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:32:52.672491 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:32:52.672916 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:32:52.678713 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:32:52.694085 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:32:52.770339 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:32:52.771400 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:32:52.772172 systemd-networkd[776]: eth0: DHCPv6 lease lost
Mar 10 01:32:52.796810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:32:52.796954 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:32:52.855950 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:32:52.858098 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:32:52.888071 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:32:52.888137 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:32:52.932774 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:32:52.939907 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:32:52.940022 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:32:52.955979 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:32:52.956067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:32:52.997958 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:32:52.998081 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:32:53.352049 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:32:53.390146 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:32:53.390688 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:32:53.417832 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:32:53.417936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:32:53.433983 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:32:53.434077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:32:53.445482 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:32:53.445812 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:32:53.446301 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:32:53.446383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:32:53.446690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:32:53.446773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:32:53.514431 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:32:53.574921 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:32:53.602390 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:32:53.638656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:32:53.638898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:32:53.685073 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:32:53.688385 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:32:53.731155 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:32:53.731453 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:32:53.782300 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:32:53.861449 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:32:53.913694 systemd[1]: Switching root.
Mar 10 01:32:53.985710 systemd-journald[194]: Journal stopped
Mar 10 01:32:59.583314 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:32:59.583414 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:32:59.583452 kernel: SELinux: policy capability open_perms=1
Mar 10 01:32:59.583469 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:32:59.583498 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:32:59.583643 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:32:59.583661 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:32:59.583676 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:32:59.583765 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:32:59.583783 kernel: audit: type=1403 audit(1773106374.546:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:32:59.583800 systemd[1]: Successfully loaded SELinux policy in 169.975ms.
Mar 10 01:32:59.583830 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 70.982ms.
Mar 10 01:32:59.583850 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:32:59.583867 systemd[1]: Detected virtualization kvm.
Mar 10 01:32:59.583883 systemd[1]: Detected architecture x86-64.
Mar 10 01:32:59.583899 systemd[1]: Detected first boot.
Mar 10 01:32:59.583920 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:32:59.583941 zram_generator::config[1053]: No configuration found.
Mar 10 01:32:59.583958 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:32:59.583974 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 01:32:59.583990 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 01:32:59.584007 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:32:59.584024 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:32:59.584046 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:32:59.584063 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:32:59.584085 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:32:59.584101 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:32:59.584118 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:32:59.584135 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:32:59.584156 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:32:59.584173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:32:59.584190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:32:59.584283 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:32:59.584307 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:32:59.584324 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:32:59.584341 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:32:59.584360 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:32:59.584376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:32:59.584400 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 01:32:59.584418 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 01:32:59.584434 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:32:59.584456 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:32:59.584474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:32:59.584492 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:32:59.584624 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:32:59.584644 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:32:59.584661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:32:59.584680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:32:59.584697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:32:59.584713 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:32:59.584738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:32:59.584754 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:32:59.584770 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:32:59.584790 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:32:59.584806 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:32:59.584822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:32:59.584838 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:32:59.584854 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:32:59.584869 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:32:59.584894 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:32:59.584912 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:32:59.584927 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:32:59.584943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:32:59.584959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:32:59.584976 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:32:59.584995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:32:59.585011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:32:59.585033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:32:59.585050 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:32:59.585066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:32:59.585084 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:32:59.585104 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 01:32:59.585121 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 01:32:59.585136 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 01:32:59.585153 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 01:32:59.585168 kernel: ACPI: bus type drm_connector registered
Mar 10 01:32:59.585254 kernel: fuse: init (API version 7.39)
Mar 10 01:32:59.585276 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:32:59.585292 kernel: loop: module loaded
Mar 10 01:32:59.585309 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:32:59.585329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:32:59.585347 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:32:59.585407 systemd-journald[1137]: Collecting audit messages is disabled.
Mar 10 01:32:59.585438 systemd-journald[1137]: Journal started
Mar 10 01:32:59.585475 systemd-journald[1137]: Runtime Journal (/run/log/journal/ff20f94fee90472289f3d63aa38786d3) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:32:57.401367 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:32:57.501310 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:32:57.503081 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 01:32:57.504453 systemd[1]: systemd-journald.service: Consumed 3.001s CPU time.
Mar 10 01:32:59.617426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:32:59.635388 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 01:32:59.635626 systemd[1]: Stopped verity-setup.service.
Mar 10 01:32:59.688754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:32:59.701352 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:32:59.713851 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:32:59.723340 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:32:59.745907 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:32:59.753897 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:32:59.814270 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:32:59.842394 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:32:59.863111 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:32:59.917356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:32:59.933290 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:32:59.934824 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:32:59.957367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:32:59.957810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:32:59.980759 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:32:59.981161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:32:59.996791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:32:59.997049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:33:00.015968 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:33:00.016404 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:33:00.024679 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:33:00.025000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:33:00.036435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:33:00.046300 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:33:00.061107 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:33:00.128989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:33:00.159436 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:33:00.197316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:33:00.234285 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:33:00.255876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:33:00.256079 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:33:00.302381 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 10 01:33:00.364925 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 01:33:00.380695 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:33:00.398491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:33:00.404780 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:33:00.422123 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:33:00.439665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:33:00.443771 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:33:00.456738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:33:00.460340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:33:00.493043 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:33:00.512771 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:33:00.557812 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 10 01:33:00.603683 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:33:00.639801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:33:00.676453 systemd-journald[1137]: Time spent on flushing to /var/log/journal/ff20f94fee90472289f3d63aa38786d3 is 62.878ms for 951 entries.
Mar 10 01:33:00.676453 systemd-journald[1137]: System Journal (/var/log/journal/ff20f94fee90472289f3d63aa38786d3) is 8.0M, max 195.6M, 187.6M free.
Mar 10 01:33:00.832816 systemd-journald[1137]: Received client request to flush runtime journal.
Mar 10 01:33:00.832904 kernel: loop0: detected capacity change from 0 to 142488
Mar 10 01:33:00.694954 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 01:33:00.714491 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:33:00.757661 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:33:00.785936 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 10 01:33:00.803392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:33:00.828833 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 10 01:33:00.837022 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:33:00.890135 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:33:00.938462 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:33:00.941148 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 10 01:33:00.978717 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:33:00.996751 kernel: loop1: detected capacity change from 0 to 219192
Mar 10 01:33:01.021032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:33:01.961470 kernel: loop2: detected capacity change from 0 to 140768
Mar 10 01:33:02.191441 kernel: loop3: detected capacity change from 0 to 142488
Mar 10 01:33:02.351785 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 10 01:33:02.351814 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 10 01:33:02.364830 kernel: loop4: detected capacity change from 0 to 219192
Mar 10 01:33:02.388621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:33:03.532925 kernel: loop5: detected capacity change from 0 to 140768
Mar 10 01:33:03.686978 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 10 01:33:03.689087 (sd-merge)[1192]: Merged extensions into '/usr'.
Mar 10 01:33:03.709680 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:33:03.709705 systemd[1]: Reloading...
Mar 10 01:33:04.020666 zram_generator::config[1220]: No configuration found.
Mar 10 01:33:04.574439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:33:04.683842 systemd[1]: Reloading finished in 972 ms.
Mar 10 01:33:04.781343 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:33:05.205693 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:33:05.228389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:33:05.242708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 10 01:33:05.281873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:33:05.334986 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:33:05.335006 systemd[1]: Reloading...
Mar 10 01:33:05.385890 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:33:05.386632 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:33:05.389355 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:33:05.393367 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 10 01:33:05.393710 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 10 01:33:05.402362 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:33:05.402433 systemd-tmpfiles[1258]: Skipping /boot
Mar 10 01:33:05.418383 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Mar 10 01:33:05.436068 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:33:05.436086 systemd-tmpfiles[1258]: Skipping /boot
Mar 10 01:33:05.470833 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:33:05.616716 zram_generator::config[1291]: No configuration found.
Mar 10 01:33:05.831119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1307)
Mar 10 01:33:05.960419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 10 01:33:05.996958 kernel: ACPI: button: Power Button [PWRF]
Mar 10 01:33:06.030420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:33:06.060864 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 01:33:06.066075 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 10 01:33:06.079138 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 01:33:06.159420 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 10 01:33:06.187683 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 01:33:06.244884 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 10 01:33:06.246280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:33:06.260082 systemd[1]: Reloading finished in 924 ms.
Mar 10 01:33:06.356408 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:33:06.370339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:33:06.427334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:33:06.537116 systemd[1]: Finished ensure-sysext.service.
Mar 10 01:33:06.596840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:33:06.653326 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:33:06.688785 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 01:33:06.722841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:33:06.733956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:33:06.762971 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:33:06.777869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:33:06.802840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:33:06.811115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:33:06.816396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 01:33:06.837665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 01:33:06.871990 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:33:06.887311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:33:06.907412 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 10 01:33:06.929455 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 01:33:06.940154 augenrules[1386]: No rules
Mar 10 01:33:06.943961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:33:06.957454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:33:06.960473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:33:06.961089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:33:06.973868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:33:06.982606 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:33:06.982907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:33:06.992932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:33:06.993197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:33:07.010748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:33:07.012973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:33:07.026419 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 01:33:07.038059 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 10 01:33:07.053161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 01:33:07.102823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 10 01:33:07.109060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:33:07.110892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:33:07.334359 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 01:33:07.777798 kernel: kvm_amd: TSC scaling supported
Mar 10 01:33:07.777930 kernel: kvm_amd: Nested Virtualization enabled
Mar 10 01:33:07.777998 kernel: kvm_amd: Nested Paging enabled
Mar 10 01:33:07.778062 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 10 01:33:07.778091 kernel: kvm_amd: PMU virtualization is disabled
Mar 10 01:33:08.131385 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 10 01:33:08.158054 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 01:33:08.168997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:33:08.197352 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 01:33:08.401351 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 10 01:33:08.754347 systemd-networkd[1380]: lo: Link UP
Mar 10 01:33:08.754364 systemd-networkd[1380]: lo: Gained carrier
Mar 10 01:33:08.762368 systemd-networkd[1380]: Enumeration completed
Mar 10 01:33:08.762894 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:33:08.770311 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:33:08.770324 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:33:08.774657 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 10 01:33:08.778929 systemd-networkd[1380]: eth0: Link UP
Mar 10 01:33:08.779019 systemd-networkd[1380]: eth0: Gained carrier
Mar 10 01:33:08.779099 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:33:08.780322 systemd-resolved[1382]: Positive Trust Anchors:
Mar 10 01:33:08.780344 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:33:08.780389 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:33:08.785631 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 01:33:08.794013 systemd-resolved[1382]: Defaulting to hostname 'linux'.
Mar 10 01:33:08.832901 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:33:08.833982 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 01:33:08.835902 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection.
Mar 10 01:33:08.844457 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:33:08.861094 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 10 01:33:08.861341 systemd-timesyncd[1383]: Initial clock synchronization to Tue 2026-03-10 01:33:08.745415 UTC.
Mar 10 01:33:08.861924 systemd[1]: Reached target network.target - Network.
Mar 10 01:33:08.875769 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:33:08.923833 kernel: EDAC MC: Ver: 3.0.0
Mar 10 01:33:08.970356 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 10 01:33:09.007660 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 10 01:33:09.056161 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:33:09.121765 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 10 01:33:09.136465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:33:09.144967 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:33:09.157293 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 01:33:09.173260 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 01:33:09.185864 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 01:33:09.196309 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 01:33:09.215075 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 01:33:09.230934 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 01:33:09.231761 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:33:09.239828 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:33:09.252094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 01:33:09.270920 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 01:33:09.295263 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 01:33:09.313936 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 10 01:33:09.334982 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 01:33:09.344889 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:33:09.357908 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:33:09.367970 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:33:09.368084 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:33:09.390062 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 01:33:09.406786 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:33:09.407974 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 01:33:09.427197 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 01:33:09.451789 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 01:33:09.452472 jq[1428]: false
Mar 10 01:33:09.463412 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 01:33:09.468221 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 01:33:09.484740 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 01:33:09.511839 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 01:33:09.515865 dbus-daemon[1427]: [system] SELinux support is enabled
Mar 10 01:33:09.525161 extend-filesystems[1429]: Found loop3
Mar 10 01:33:09.525161 extend-filesystems[1429]: Found loop4
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found loop5
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found sr0
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda1
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda2
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda3
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found usr
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda4
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda6
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda7
Mar 10 01:33:09.537942 extend-filesystems[1429]: Found vda9
Mar 10 01:33:09.537942 extend-filesystems[1429]: Checking size of /dev/vda9
Mar 10 01:33:09.638832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1302)
Mar 10 01:33:09.638944 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 10 01:33:09.527333 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 01:33:09.639133 extend-filesystems[1429]: Resized partition /dev/vda9
Mar 10 01:33:09.644758 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024)
Mar 10 01:33:09.669993 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 01:33:09.683155 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 01:33:09.683919 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 01:33:09.685468 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 01:33:09.710219 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 01:33:09.723766 jq[1450]: true
Mar 10 01:33:09.725085 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 01:33:09.742325 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 10 01:33:09.780984 update_engine[1448]: I20260310 01:33:09.780877 1448 main.cc:92] Flatcar Update Engine starting
Mar 10 01:33:09.788620 update_engine[1448]: I20260310 01:33:09.787829 1448 update_check_scheduler.cc:74] Next update check in 5m18s
Mar 10 01:33:09.788370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 01:33:09.788958 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 01:33:09.789763 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 01:33:09.790169 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 01:33:09.800178 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 10 01:33:09.816136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 01:33:09.818025 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 01:33:09.873703 jq[1454]: true
Mar 10 01:33:09.870225 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 10 01:33:09.897099 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 10 01:33:09.897099 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 10 01:33:09.897099 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 10 01:33:09.870250 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 01:33:09.926179 extend-filesystems[1429]: Resized filesystem in /dev/vda9
Mar 10 01:33:09.912006 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 10 01:33:09.871759 systemd-logind[1446]: New seat seat0.
Mar 10 01:33:09.880425 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 01:33:09.888166 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 01:33:09.888736 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 01:33:09.937963 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 01:33:09.942254 tar[1453]: linux-amd64/LICENSE
Mar 10 01:33:09.943235 tar[1453]: linux-amd64/helm
Mar 10 01:33:09.954642 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 01:33:09.965777 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 01:33:09.966051 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 01:33:09.985063 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 01:33:09.985235 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 01:33:10.014058 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 01:33:10.050619 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 10 01:33:10.084004 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 01:33:10.089613 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 01:33:10.104197 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 10 01:33:10.125218 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 10 01:33:10.159754 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 01:33:10.160094 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 10 01:33:10.228124 systemd[1]: issuegen.service: Deactivated successfully.
Mar 10 01:33:10.228969 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 10 01:33:10.264699 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 10 01:33:10.296719 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 10 01:33:10.325705 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 10 01:33:10.358939 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 10 01:33:10.373372 systemd[1]: Reached target getty.target - Login Prompts.
Mar 10 01:33:10.401957 systemd-networkd[1380]: eth0: Gained IPv6LL
Mar 10 01:33:10.407591 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 10 01:33:10.425023 systemd[1]: Reached target network-online.target - Network is Online.
Mar 10 01:33:10.452279 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 10 01:33:10.455383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:33:10.466765 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 10 01:33:10.531367 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 10 01:33:10.533712 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 10 01:33:10.551720 containerd[1456]: time="2026-03-10T01:33:10.551150193Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 10 01:33:10.554356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 10 01:33:10.579693 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 10 01:33:10.619969 containerd[1456]: time="2026-03-10T01:33:10.618294046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.632018 containerd[1456]: time="2026-03-10T01:33:10.631960311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.632977719Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.633015349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.633355513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.633398881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.633644456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.633669651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.634013377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.634036564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.634054686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.634068813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.634263374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.635305 containerd[1456]: time="2026-03-10T01:33:10.634752287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:33:10.636446 containerd[1456]: time="2026-03-10T01:33:10.634979938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:33:10.636446 containerd[1456]: time="2026-03-10T01:33:10.635002393Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 10 01:33:10.636446 containerd[1456]: time="2026-03-10T01:33:10.635141528Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 10 01:33:10.636446 containerd[1456]: time="2026-03-10T01:33:10.635255614Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 01:33:10.662713 containerd[1456]: time="2026-03-10T01:33:10.661951427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 10 01:33:10.662713 containerd[1456]: time="2026-03-10T01:33:10.662071023Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 10 01:33:10.662713 containerd[1456]: time="2026-03-10T01:33:10.662096268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 10 01:33:10.662713 containerd[1456]: time="2026-03-10T01:33:10.662118407Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 10 01:33:10.662713 containerd[1456]: time="2026-03-10T01:33:10.662205172Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 10 01:33:10.662713 containerd[1456]: time="2026-03-10T01:33:10.662455495Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 10 01:33:10.662988 containerd[1456]: time="2026-03-10T01:33:10.662885848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 10 01:33:10.663197 containerd[1456]: time="2026-03-10T01:33:10.663040670Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 01:33:10.663197 containerd[1456]: time="2026-03-10T01:33:10.663068725Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 10 01:33:10.663197 containerd[1456]: time="2026-03-10T01:33:10.663086185Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 10 01:33:10.663197 containerd[1456]: time="2026-03-10T01:33:10.663107196Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663197591Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663222391Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663241810Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663262919Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663280705Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663297839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663315 containerd[1456]: time="2026-03-10T01:33:10.663315120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663340851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663361407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663378619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663398136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663415359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663433402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663450111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663635 containerd[1456]: time="2026-03-10T01:33:10.663467758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663637183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663675980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663697851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663717122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663734116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663756404Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663784843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663802600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.663876 containerd[1456]: time="2026-03-10T01:33:10.663849717Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.663918320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.663944326Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.663961143Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.663977970Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.663992105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.664009506Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.664030578Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 01:33:10.664197 containerd[1456]: time="2026-03-10T01:33:10.664045049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.667393375Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.667473126Z" level=info msg="Connect containerd service"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.667641057Z" level=info msg="using legacy CRI server"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.667654936Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.667771269Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.668796243Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669280785Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669327972Z" level=info msg="Start subscribing containerd event"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669357291Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669636581Z" level=info msg="Start recovering state"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669843318Z" level=info msg="Start event monitor"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669860808Z" level=info msg="Start snapshots syncer"
Mar 10 01:33:10.670272 containerd[1456]: time="2026-03-10T01:33:10.669874212Z" level=info msg="Start cni network conf syncer for default"
Mar 10 01:33:10.680724 containerd[1456]: time="2026-03-10T01:33:10.674772582Z" level=info msg="Start streaming server"
Mar 10 01:33:10.680724 containerd[1456]: time="2026-03-10T01:33:10.675055767Z" level=info msg="containerd successfully booted in 0.126955s"
Mar 10 01:33:10.675163 systemd[1]: Started containerd.service - containerd container runtime.
Mar 10 01:33:11.323467 tar[1453]: linux-amd64/README.md
Mar 10 01:33:11.357342 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 10 01:33:12.600667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:33:12.611311 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:33:12.612005 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 10 01:33:12.628206 systemd[1]: Startup finished in 5.516s (kernel) + 34.224s (initrd) + 18.243s (userspace) = 57.983s.
Mar 10 01:33:14.076286 kubelet[1540]: E0310 01:33:14.076098 1540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:33:14.085267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:33:14.085986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:33:14.087250 systemd[1]: kubelet.service: Consumed 1.686s CPU time.
Mar 10 01:33:19.181375 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 10 01:33:19.235155 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:38272.service - OpenSSH per-connection server daemon (10.0.0.1:38272).
Mar 10 01:33:19.764372 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 38272 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:19.785111 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:19.844831 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 10 01:33:19.861770 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 10 01:33:19.898498 systemd-logind[1446]: New session 1 of user core.
Mar 10 01:33:19.919239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 10 01:33:19.937760 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 10 01:33:19.956344 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 10 01:33:20.238935 systemd[1558]: Queued start job for default target default.target.
Mar 10 01:33:20.253682 systemd[1558]: Created slice app.slice - User Application Slice.
Mar 10 01:33:20.253722 systemd[1558]: Reached target paths.target - Paths.
Mar 10 01:33:20.253744 systemd[1558]: Reached target timers.target - Timers.
Mar 10 01:33:20.257455 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 10 01:33:20.329971 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 10 01:33:20.331653 systemd[1558]: Reached target sockets.target - Sockets.
Mar 10 01:33:20.331720 systemd[1558]: Reached target basic.target - Basic System.
Mar 10 01:33:20.331805 systemd[1558]: Reached target default.target - Main User Target.
Mar 10 01:33:20.331875 systemd[1558]: Startup finished in 356ms.
Mar 10 01:33:20.331902 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 10 01:33:20.345876 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 10 01:33:20.470580 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:38284.service - OpenSSH per-connection server daemon (10.0.0.1:38284).
Mar 10 01:33:20.608596 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 38284 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:20.612216 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:20.638105 systemd-logind[1446]: New session 2 of user core.
Mar 10 01:33:20.644955 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 10 01:33:20.753015 sshd[1569]: pam_unix(sshd:session): session closed for user core
Mar 10 01:33:20.764986 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:38284.service: Deactivated successfully.
Mar 10 01:33:20.768338 systemd[1]: session-2.scope: Deactivated successfully.
Mar 10 01:33:20.777033 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit.
Mar 10 01:33:20.783578 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:38292.service - OpenSSH per-connection server daemon (10.0.0.1:38292).
Mar 10 01:33:20.785394 systemd-logind[1446]: Removed session 2.
Mar 10 01:33:20.878821 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 38292 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:20.881881 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:20.910093 systemd-logind[1446]: New session 3 of user core.
Mar 10 01:33:20.919274 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 10 01:33:21.006149 sshd[1576]: pam_unix(sshd:session): session closed for user core
Mar 10 01:33:21.035851 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:38292.service: Deactivated successfully.
Mar 10 01:33:21.041878 systemd[1]: session-3.scope: Deactivated successfully.
Mar 10 01:33:21.044729 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit.
Mar 10 01:33:21.065059 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:38298.service - OpenSSH per-connection server daemon (10.0.0.1:38298).
Mar 10 01:33:21.073377 systemd-logind[1446]: Removed session 3.
Mar 10 01:33:21.155850 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 38298 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:21.157226 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:21.171043 systemd-logind[1446]: New session 4 of user core.
Mar 10 01:33:21.182252 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 10 01:33:21.291654 sshd[1584]: pam_unix(sshd:session): session closed for user core
Mar 10 01:33:21.307485 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:38298.service: Deactivated successfully.
Mar 10 01:33:21.311333 systemd[1]: session-4.scope: Deactivated successfully.
Mar 10 01:33:21.314864 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit.
Mar 10 01:33:21.331473 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:38306.service - OpenSSH per-connection server daemon (10.0.0.1:38306).
Mar 10 01:33:21.332910 systemd-logind[1446]: Removed session 4.
Mar 10 01:33:21.396388 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 38306 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:21.398379 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:21.410317 systemd-logind[1446]: New session 5 of user core.
Mar 10 01:33:21.422488 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 10 01:33:21.506477 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 10 01:33:21.508217 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:33:21.543419 sudo[1594]: pam_unix(sudo:session): session closed for user root
Mar 10 01:33:21.547664 sshd[1591]: pam_unix(sshd:session): session closed for user core
Mar 10 01:33:21.563626 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:38306.service: Deactivated successfully.
Mar 10 01:33:21.566708 systemd[1]: session-5.scope: Deactivated successfully.
Mar 10 01:33:21.570300 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit.
Mar 10 01:33:21.580919 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:38312.service - OpenSSH per-connection server daemon (10.0.0.1:38312).
Mar 10 01:33:21.584478 systemd-logind[1446]: Removed session 5.
Mar 10 01:33:21.641685 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 38312 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:21.648035 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:21.667360 systemd-logind[1446]: New session 6 of user core.
Mar 10 01:33:21.680052 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 10 01:33:21.764615 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 10 01:33:21.766010 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:33:21.781047 sudo[1604]: pam_unix(sudo:session): session closed for user root
Mar 10 01:33:21.798328 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 10 01:33:21.799307 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:33:21.850084 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 10 01:33:21.856323 auditctl[1607]: No rules
Mar 10 01:33:21.857498 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 01:33:21.858358 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 10 01:33:21.882013 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:33:21.960709 augenrules[1625]: No rules
Mar 10 01:33:21.963952 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:33:21.971362 sudo[1603]: pam_unix(sudo:session): session closed for user root
Mar 10 01:33:21.976900 sshd[1599]: pam_unix(sshd:session): session closed for user core
Mar 10 01:33:22.001095 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:38312.service: Deactivated successfully.
Mar 10 01:33:22.004424 systemd[1]: session-6.scope: Deactivated successfully.
Mar 10 01:33:22.008350 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit.
Mar 10 01:33:22.021645 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:38322.service - OpenSSH per-connection server daemon (10.0.0.1:38322).
Mar 10 01:33:22.032784 systemd-logind[1446]: Removed session 6.
Mar 10 01:33:22.121583 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:33:22.128172 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:33:22.149931 systemd-logind[1446]: New session 7 of user core.
Mar 10 01:33:22.169263 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 10 01:33:22.248641 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 01:33:22.249209 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:33:22.970491 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 01:33:22.972793 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 01:33:23.982615 dockerd[1654]: time="2026-03-10T01:33:23.981220831Z" level=info msg="Starting up"
Mar 10 01:33:24.236371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:33:24.252422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:33:24.372137 dockerd[1654]: time="2026-03-10T01:33:24.371042481Z" level=info msg="Loading containers: start."
Mar 10 01:33:24.752084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:33:24.774156 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:33:24.969719 kubelet[1718]: E0310 01:33:24.969346 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:33:24.990928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:33:24.991297 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:33:25.000883 kernel: Initializing XFRM netlink socket
Mar 10 01:33:25.495921 systemd-networkd[1380]: docker0: Link UP
Mar 10 01:33:25.594498 dockerd[1654]: time="2026-03-10T01:33:25.594292968Z" level=info msg="Loading containers: done."
Mar 10 01:33:25.631464 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1279632844-merged.mount: Deactivated successfully.
Mar 10 01:33:25.649917 dockerd[1654]: time="2026-03-10T01:33:25.649298491Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 01:33:25.650200 dockerd[1654]: time="2026-03-10T01:33:25.650082110Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 10 01:33:25.650936 dockerd[1654]: time="2026-03-10T01:33:25.650257989Z" level=info msg="Daemon has completed initialization"
Mar 10 01:33:25.816650 dockerd[1654]: time="2026-03-10T01:33:25.816472373Z" level=info msg="API listen on /run/docker.sock"
Mar 10 01:33:25.816911 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 01:33:27.374721 containerd[1456]: time="2026-03-10T01:33:27.373474694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 10 01:33:28.944200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60939311.mount: Deactivated successfully.
Mar 10 01:33:35.205892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 01:33:35.224704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:33:36.304459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:33:36.347389 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:33:37.434201 kubelet[1884]: E0310 01:33:37.427164 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:33:37.448990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:33:37.451854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:33:37.474464 systemd[1]: kubelet.service: Consumed 1.786s CPU time.
Mar 10 01:33:43.179329 containerd[1456]: time="2026-03-10T01:33:43.176238919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:43.200385 containerd[1456]: time="2026-03-10T01:33:43.197430068Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 10 01:33:43.204949 containerd[1456]: time="2026-03-10T01:33:43.204460177Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:43.224904 containerd[1456]: time="2026-03-10T01:33:43.224689174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:43.231296 containerd[1456]: time="2026-03-10T01:33:43.230773896Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 15.857149539s"
Mar 10 01:33:43.231296 containerd[1456]: time="2026-03-10T01:33:43.230877100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 10 01:33:43.297369 containerd[1456]: time="2026-03-10T01:33:43.291023152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 10 01:33:47.545660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 10 01:33:47.563919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:33:48.445891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:33:48.478879 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:33:48.995159 kubelet[1905]: E0310 01:33:48.993357 1905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:33:49.019094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:33:49.020087 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:33:49.022783 systemd[1]: kubelet.service: Consumed 1.050s CPU time.
Mar 10 01:33:49.510065 containerd[1456]: time="2026-03-10T01:33:49.509265629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:49.514273 containerd[1456]: time="2026-03-10T01:33:49.513831987Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 10 01:33:49.519711 containerd[1456]: time="2026-03-10T01:33:49.519593964Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:49.531074 containerd[1456]: time="2026-03-10T01:33:49.530750947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:49.538889 containerd[1456]: time="2026-03-10T01:33:49.538578714Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 6.237644332s"
Mar 10 01:33:49.538889 containerd[1456]: time="2026-03-10T01:33:49.538630401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 10 01:33:49.561921 containerd[1456]: time="2026-03-10T01:33:49.561142873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 10 01:33:52.735269 containerd[1456]: time="2026-03-10T01:33:52.734254431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:52.738651 containerd[1456]: time="2026-03-10T01:33:52.737590748Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 10 01:33:52.741157 containerd[1456]: time="2026-03-10T01:33:52.739896692Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:52.758848 containerd[1456]: time="2026-03-10T01:33:52.758714211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:52.763655 containerd[1456]: time="2026-03-10T01:33:52.763343480Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 3.201955646s"
Mar 10 01:33:52.763655 containerd[1456]: time="2026-03-10T01:33:52.763578061Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 10 01:33:52.766217 containerd[1456]: time="2026-03-10T01:33:52.765415439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 10 01:33:54.911450 update_engine[1448]: I20260310 01:33:54.738984 1448 update_attempter.cc:509] Updating boot flags...
Mar 10 01:33:56.190682 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1925)
Mar 10 01:33:56.406949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1926)
Mar 10 01:33:59.045213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 10 01:33:59.069172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:33:59.082104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043827406.mount: Deactivated successfully.
Mar 10 01:33:59.830346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:33:59.838876 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:34:00.262087 kubelet[1947]: E0310 01:34:00.255905 1947 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:34:00.268760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:34:00.269077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:34:01.454269 containerd[1456]: time="2026-03-10T01:34:01.449909813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:01.462851 containerd[1456]: time="2026-03-10T01:34:01.461316063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 10 01:34:01.465684 containerd[1456]: time="2026-03-10T01:34:01.465399879Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:01.474805 containerd[1456]: time="2026-03-10T01:34:01.473488006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:01.474915 containerd[1456]: time="2026-03-10T01:34:01.474878114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 8.708809676s"
Mar 10 01:34:01.474945 containerd[1456]: time="2026-03-10T01:34:01.474913642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 10 01:34:01.478402 containerd[1456]: time="2026-03-10T01:34:01.477999377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 10 01:34:02.608733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839858520.mount: Deactivated successfully.
Mar 10 01:34:10.296447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 10 01:34:10.322749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:34:11.434414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:34:11.448101 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:34:11.522477 containerd[1456]: time="2026-03-10T01:34:11.520844165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:11.526857 containerd[1456]: time="2026-03-10T01:34:11.526803742Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 10 01:34:11.538660 containerd[1456]: time="2026-03-10T01:34:11.537463896Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:11.544313 containerd[1456]: time="2026-03-10T01:34:11.544222330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:11.545749 containerd[1456]: time="2026-03-10T01:34:11.545666826Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 10.06762488s"
Mar 10 01:34:11.545825 containerd[1456]: time="2026-03-10T01:34:11.545757206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 10 01:34:11.547971 containerd[1456]: time="2026-03-10T01:34:11.547145667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 10 01:34:11.937357 kubelet[2017]: E0310 01:34:11.935594 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:34:11.967054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:34:11.967445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:34:11.975078 systemd[1]: kubelet.service: Consumed 1.108s CPU time.
Mar 10 01:34:12.546005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497558441.mount: Deactivated successfully.
Mar 10 01:34:12.572672 containerd[1456]: time="2026-03-10T01:34:12.572434164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:12.584340 containerd[1456]: time="2026-03-10T01:34:12.580732812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 10 01:34:12.618683 containerd[1456]: time="2026-03-10T01:34:12.617872705Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:12.685956 containerd[1456]: time="2026-03-10T01:34:12.682427082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:12.825432 containerd[1456]: time="2026-03-10T01:34:12.808062616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.260837368s"
Mar 10 01:34:12.825432 containerd[1456]: time="2026-03-10T01:34:12.814380573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 10 01:34:12.882738 containerd[1456]: time="2026-03-10T01:34:12.881752516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 10 01:34:13.912887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617274443.mount: Deactivated successfully.
Mar 10 01:34:21.686396 containerd[1456]: time="2026-03-10T01:34:21.685177196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:21.703257 containerd[1456]: time="2026-03-10T01:34:21.697481990Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 10 01:34:21.711038 containerd[1456]: time="2026-03-10T01:34:21.709424258Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:21.778445 containerd[1456]: time="2026-03-10T01:34:21.771609101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:21.803773 containerd[1456]: time="2026-03-10T01:34:21.771944830Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 8.889990707s"
Mar 10 01:34:21.803773 containerd[1456]: time="2026-03-10T01:34:21.786463437Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 10 01:34:22.298616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 10 01:34:23.035486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:34:24.212111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:34:24.260580 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:34:24.463181 kubelet[2121]: E0310 01:34:24.462623 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:34:24.469138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:34:24.469611 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:34:30.540879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:34:30.569267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:34:30.679842 systemd[1]: Reloading requested from client PID 2141 ('systemctl') (unit session-7.scope)...
Mar 10 01:34:30.680062 systemd[1]: Reloading...
Mar 10 01:34:30.898623 zram_generator::config[2176]: No configuration found.
Mar 10 01:34:31.219255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:34:31.409894 systemd[1]: Reloading finished in 727 ms.
Mar 10 01:34:31.610647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:34:31.626851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:34:31.628307 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:34:31.629055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:34:31.647447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:34:32.142991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:34:32.170225 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:34:32.458199 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 01:34:32.458199 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:34:32.458199 kubelet[2229]: I0310 01:34:32.448307 2229 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 01:34:34.671384 kubelet[2229]: I0310 01:34:34.668345 2229 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 10 01:34:34.678017 kubelet[2229]: I0310 01:34:34.676809 2229 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:34:34.678017 kubelet[2229]: I0310 01:34:34.676915 2229 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 01:34:34.678017 kubelet[2229]: I0310 01:34:34.676934 2229 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:34:34.678017 kubelet[2229]: I0310 01:34:34.677210 2229 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 01:34:34.868928 kubelet[2229]: I0310 01:34:34.866075 2229 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:34:34.870378 kubelet[2229]: E0310 01:34:34.870326 2229 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:34:34.904449 kubelet[2229]: E0310 01:34:34.901751 2229 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:34:34.904449 kubelet[2229]: I0310 01:34:34.901824 2229 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:34:34.926028 kubelet[2229]: I0310 01:34:34.923929 2229 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 01:34:34.934264 kubelet[2229]: I0310 01:34:34.933058 2229 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:34:34.934264 kubelet[2229]: I0310 01:34:34.933452 2229 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:34:34.938647 kubelet[2229]: I0310 01:34:34.934971 2229 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 01:34:34.938647 kubelet[2229]: I0310 01:34:34.934991 2229 container_manager_linux.go:306] "Creating device plugin manager"
Mar 10 01:34:34.938647 kubelet[2229]: I0310 01:34:34.935142 2229 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 01:34:34.947935 kubelet[2229]: I0310 01:34:34.945820 2229 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:34:34.947935 kubelet[2229]: I0310 01:34:34.946181 2229 kubelet.go:475] "Attempting to sync node with API server"
Mar 10 01:34:34.947935 kubelet[2229]: I0310 01:34:34.946202 2229 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:34:34.947935 kubelet[2229]: I0310 01:34:34.946232 2229 kubelet.go:387] "Adding apiserver pod source"
Mar 10 01:34:34.947935 kubelet[2229]: I0310 01:34:34.946252 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:34:34.970292 kubelet[2229]: E0310 01:34:34.969229 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:34:34.970292 kubelet[2229]: I0310 01:34:34.969427 2229 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:34:34.977129 kubelet[2229]: E0310 01:34:34.973777 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:34:34.977129 kubelet[2229]: I0310 01:34:34.975295 2229 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:34:34.977129 kubelet[2229]: I0310 01:34:34.975375 2229 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 01:34:34.977129 kubelet[2229]: W0310 01:34:34.975870 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 01:34:35.006873 kubelet[2229]: I0310 01:34:35.001192 2229 server.go:1262] "Started kubelet"
Mar 10 01:34:35.021319 kubelet[2229]: I0310 01:34:35.020871 2229 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:34:35.029968 kubelet[2229]: I0310 01:34:35.027856 2229 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:34:35.029968 kubelet[2229]: I0310 01:34:35.029073 2229 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 01:34:35.029968 kubelet[2229]: I0310 01:34:35.029494 2229 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:34:35.083171 kubelet[2229]: E0310 01:34:35.075614 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b56f1d763820a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:34:35.00115201 +0000 UTC m=+2.819897342,LastTimestamp:2026-03-10 01:34:35.00115201 +0000 UTC m=+2.819897342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 01:34:35.088030 kubelet[2229]: E0310 01:34:35.085501 2229 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:34:35.088030 kubelet[2229]: I0310 01:34:35.086011 2229 server.go:310] "Adding debug handlers to kubelet server"
Mar 10 01:34:35.156782 kubelet[2229]: I0310 01:34:35.155835 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 01:34:35.156782 kubelet[2229]: I0310 01:34:35.157034 2229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:34:35.171905 kubelet[2229]: E0310 01:34:35.168489 2229 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:34:35.171905 kubelet[2229]: I0310 01:34:35.169240 2229 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 10 01:34:35.174644 kubelet[2229]: I0310 01:34:35.172471 2229 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 01:34:35.174644 kubelet[2229]: I0310 01:34:35.172884 2229 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 01:34:35.174644 kubelet[2229]: E0310 01:34:35.173838 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:34:35.174644 kubelet[2229]: E0310 01:34:35.173942 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms"
Mar 10 01:34:35.174644 kubelet[2229]: I0310 01:34:35.174490 2229 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:34:35.175149 kubelet[2229]: I0310 01:34:35.175053 2229 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:34:35.179024 kubelet[2229]: I0310 01:34:35.178847 2229 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:34:35.276902 kubelet[2229]: E0310 01:34:35.275527 2229 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:34:35.378180 kubelet[2229]: E0310 01:34:35.376266 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms"
Mar 10 01:34:35.378180 kubelet[2229]: E0310 01:34:35.376448 2229 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:34:35.390844 kubelet[2229]: I0310 01:34:35.390136 2229 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 01:34:35.390844 kubelet[2229]: I0310 01:34:35.390217 2229 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 01:34:35.390844 kubelet[2229]: I0310 01:34:35.390250 2229 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:34:35.459742 kubelet[2229]: I0310 01:34:35.442141 2229 policy_none.go:49] "None policy: Start"
Mar 10 01:34:35.459742 kubelet[2229]: I0310 01:34:35.442947 2229 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 01:34:35.459742 kubelet[2229]: I0310 01:34:35.443307 2229 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 01:34:35.479999 kubelet[2229]: I0310 01:34:35.465952 2229 policy_none.go:47] "Start"
Mar 10 01:34:35.479999 kubelet[2229]: E0310 01:34:35.478788 2229 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:34:35.494558 kubelet[2229]: I0310 01:34:35.492975 2229 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:34:35.514606 kubelet[2229]: I0310 01:34:35.513917 2229 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:34:35.514606 kubelet[2229]: I0310 01:34:35.513958 2229 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 10 01:34:35.514606 kubelet[2229]: I0310 01:34:35.514089 2229 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 10 01:34:35.517811 kubelet[2229]: E0310 01:34:35.514294 2229 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:34:35.518137 kubelet[2229]: E0310 01:34:35.518022 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:34:35.519417 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 10 01:34:35.546442 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 10 01:34:35.566434 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 10 01:34:35.582111 kubelet[2229]: E0310 01:34:35.580156 2229 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:34:35.587822 kubelet[2229]: E0310 01:34:35.587200 2229 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:34:35.595129 kubelet[2229]: I0310 01:34:35.590730 2229 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 01:34:35.595129 kubelet[2229]: I0310 01:34:35.590794 2229 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:34:35.595129 kubelet[2229]: I0310 01:34:35.592610 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 01:34:35.601837 kubelet[2229]: E0310 01:34:35.601730 2229 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:34:35.601837 kubelet[2229]: E0310 01:34:35.601788 2229 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 10 01:34:35.718837 kubelet[2229]: I0310 01:34:35.717135 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:34:35.731148 kubelet[2229]: E0310 01:34:35.724098 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
Mar 10 01:34:35.751051 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 10 01:34:35.786394 kubelet[2229]: I0310 01:34:35.783968 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:34:35.786394 kubelet[2229]: I0310 01:34:35.784010 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:34:35.786394 kubelet[2229]: I0310 01:34:35.784044 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67878fe3bf5562691b14866aa164ca85-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"67878fe3bf5562691b14866aa164ca85\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:34:35.786394 kubelet[2229]: I0310 01:34:35.784125 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:34:35.786394 kubelet[2229]: I0310 01:34:35.784150 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:34:35.789618 kubelet[2229]: I0310 01:34:35.784218 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:34:35.789618 kubelet[2229]: I0310 01:34:35.784242 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:34:35.789618 kubelet[2229]: I0310 01:34:35.784263 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67878fe3bf5562691b14866aa164ca85-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"67878fe3bf5562691b14866aa164ca85\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:34:35.789618 kubelet[2229]: I0310 01:34:35.784287 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67878fe3bf5562691b14866aa164ca85-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"67878fe3bf5562691b14866aa164ca85\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:34:35.789618 kubelet[2229]: E0310 01:34:35.785185 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms"
Mar 10 01:34:35.840032 kubelet[2229]: E0310 01:34:35.839059 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:34:35.890086 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 10 01:34:35.900972 kubelet[2229]: E0310 01:34:35.900866 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:34:35.930767 kubelet[2229]: E0310 01:34:35.929158 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:35.936828 containerd[1456]: time="2026-03-10T01:34:35.935648566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 10 01:34:35.937995 kubelet[2229]: I0310 01:34:35.937465 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:34:35.938111 kubelet[2229]: E0310 01:34:35.938030 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
Mar 10 01:34:35.980081 systemd[1]: Created slice kubepods-burstable-pod67878fe3bf5562691b14866aa164ca85.slice - libcontainer container kubepods-burstable-pod67878fe3bf5562691b14866aa164ca85.slice.
Mar 10 01:34:35.989281 kubelet[2229]: E0310 01:34:35.988247 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:34:36.001900 kubelet[2229]: E0310 01:34:35.998892 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:36.002037 containerd[1456]: time="2026-03-10T01:34:35.999614336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:67878fe3bf5562691b14866aa164ca85,Namespace:kube-system,Attempt:0,}"
Mar 10 01:34:36.162201 kubelet[2229]: E0310 01:34:36.161193 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:36.162464 containerd[1456]: time="2026-03-10T01:34:36.162274545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 10 01:34:36.478468 kubelet[2229]: E0310 01:34:36.477922 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:34:36.479937 kubelet[2229]: I0310 01:34:36.479307 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:34:36.483480 kubelet[2229]: E0310 01:34:36.483309 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
Mar 10 01:34:36.562418 kubelet[2229]: E0310 01:34:36.548370 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:34:36.587481 kubelet[2229]: E0310 01:34:36.586632 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s"
Mar 10 01:34:36.631442 kubelet[2229]: E0310 01:34:36.602105 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:34:36.669023 kubelet[2229]: E0310 01:34:36.666830 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:34:36.979457 kubelet[2229]: E0310 01:34:36.978045 2229 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:34:37.310461 kubelet[2229]: I0310 01:34:37.309186 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:34:37.310461 kubelet[2229]: E0310 01:34:37.310128 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
Mar 10 01:34:37.582444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427580989.mount: Deactivated successfully.
Mar 10 01:34:37.630480 containerd[1456]: time="2026-03-10T01:34:37.628485103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:34:37.678919 containerd[1456]: time="2026-03-10T01:34:37.677040372Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 10 01:34:37.684646 containerd[1456]: time="2026-03-10T01:34:37.682101422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:34:37.684646 containerd[1456]: time="2026-03-10T01:34:37.683471536Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:34:37.687212 containerd[1456]: time="2026-03-10T01:34:37.687052675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 10 01:34:37.688987 containerd[1456]: time="2026-03-10T01:34:37.688907784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 10 01:34:37.693589 containerd[1456]: time="2026-03-10T01:34:37.692662731Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:34:37.696449 containerd[1456]: time="2026-03-10T01:34:37.696272315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:34:37.699583 containerd[1456]: time="2026-03-10T01:34:37.698351685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.762279808s"
Mar 10 01:34:37.701656 containerd[1456]: time="2026-03-10T01:34:37.701358197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.701599993s"
Mar 10 01:34:37.719150 containerd[1456]: time="2026-03-10T01:34:37.713657230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.551292056s"
Mar 10 01:34:38.194055 kubelet[2229]: E0310 01:34:38.190207 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="3.2s"
Mar 10 01:34:38.212164 kubelet[2229]: E0310 01:34:38.211873 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:34:38.903068 kubelet[2229]: E0310 01:34:38.881851 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:34:38.988243 kubelet[2229]: I0310 01:34:38.979140 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:34:38.988243 kubelet[2229]: E0310 01:34:38.987621 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
Mar 10 01:34:39.013196 kubelet[2229]: E0310 01:34:39.013133 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:34:39.305300 kubelet[2229]: E0310 01:34:39.305158 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:34:39.796394 containerd[1456]: time="2026-03-10T01:34:39.795993002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:34:39.806704 containerd[1456]: time="2026-03-10T01:34:39.803262477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:34:39.806704 containerd[1456]: time="2026-03-10T01:34:39.803310267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.806704 containerd[1456]: time="2026-03-10T01:34:39.804827324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.806937 containerd[1456]: time="2026-03-10T01:34:39.805888063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:34:39.806937 containerd[1456]: time="2026-03-10T01:34:39.806025660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:34:39.806937 containerd[1456]: time="2026-03-10T01:34:39.806208793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.806937 containerd[1456]: time="2026-03-10T01:34:39.806426511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.822490 containerd[1456]: time="2026-03-10T01:34:39.822273764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:34:39.829952 containerd[1456]: time="2026-03-10T01:34:39.822965287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:34:39.829952 containerd[1456]: time="2026-03-10T01:34:39.823001545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.829952 containerd[1456]: time="2026-03-10T01:34:39.823131658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:40.075607 systemd[1]: run-containerd-runc-k8s.io-d6fabba675721339e8d6ffd260ee1cac89a69af5bb6d3719a5a464ca07f135e6-runc.QYs9GM.mount: Deactivated successfully.
Mar 10 01:34:40.129939 systemd[1]: Started cri-containerd-d6fabba675721339e8d6ffd260ee1cac89a69af5bb6d3719a5a464ca07f135e6.scope - libcontainer container d6fabba675721339e8d6ffd260ee1cac89a69af5bb6d3719a5a464ca07f135e6.
Mar 10 01:34:40.321146 systemd[1]: Started cri-containerd-a311f05567113627cb1f04eed0676ef58c9a24341b66f9ea156c24f074338b00.scope - libcontainer container a311f05567113627cb1f04eed0676ef58c9a24341b66f9ea156c24f074338b00.
Mar 10 01:34:40.374688 systemd[1]: Started cri-containerd-164e1ce85ce3de52df716893a10b508d006c9871fcf0492435017eb2d3cbd521.scope - libcontainer container 164e1ce85ce3de52df716893a10b508d006c9871fcf0492435017eb2d3cbd521.
Mar 10 01:34:40.757851 containerd[1456]: time="2026-03-10T01:34:40.756717912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6fabba675721339e8d6ffd260ee1cac89a69af5bb6d3719a5a464ca07f135e6\"" Mar 10 01:34:40.766231 kubelet[2229]: E0310 01:34:40.765986 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:40.807364 containerd[1456]: time="2026-03-10T01:34:40.806459064Z" level=info msg="CreateContainer within sandbox \"d6fabba675721339e8d6ffd260ee1cac89a69af5bb6d3719a5a464ca07f135e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 01:34:40.808321 containerd[1456]: time="2026-03-10T01:34:40.808201907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a311f05567113627cb1f04eed0676ef58c9a24341b66f9ea156c24f074338b00\"" Mar 10 01:34:40.811847 kubelet[2229]: E0310 01:34:40.811682 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:40.836272 containerd[1456]: time="2026-03-10T01:34:40.831411135Z" level=info msg="CreateContainer within sandbox \"a311f05567113627cb1f04eed0676ef58c9a24341b66f9ea156c24f074338b00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 01:34:40.866219 containerd[1456]: time="2026-03-10T01:34:40.866160675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:67878fe3bf5562691b14866aa164ca85,Namespace:kube-system,Attempt:0,} returns sandbox id \"164e1ce85ce3de52df716893a10b508d006c9871fcf0492435017eb2d3cbd521\"" Mar 10 
01:34:40.869546 kubelet[2229]: E0310 01:34:40.869402 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:40.893585 containerd[1456]: time="2026-03-10T01:34:40.893459231Z" level=info msg="CreateContainer within sandbox \"164e1ce85ce3de52df716893a10b508d006c9871fcf0492435017eb2d3cbd521\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 01:34:40.926500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446387880.mount: Deactivated successfully. Mar 10 01:34:41.010200 containerd[1456]: time="2026-03-10T01:34:41.007436283Z" level=info msg="CreateContainer within sandbox \"a311f05567113627cb1f04eed0676ef58c9a24341b66f9ea156c24f074338b00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66658b16ca138a1ed2b87eface345d0902228c9e1cf464b32a8da8a0aff7d923\"" Mar 10 01:34:41.011638 containerd[1456]: time="2026-03-10T01:34:41.011483336Z" level=info msg="StartContainer for \"66658b16ca138a1ed2b87eface345d0902228c9e1cf464b32a8da8a0aff7d923\"" Mar 10 01:34:41.040423 containerd[1456]: time="2026-03-10T01:34:41.040259776Z" level=info msg="CreateContainer within sandbox \"d6fabba675721339e8d6ffd260ee1cac89a69af5bb6d3719a5a464ca07f135e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e06ee7071d1bb99a8ff9b6707834b8d957232728c9e48ae6c41de737ef2b026c\"" Mar 10 01:34:41.049274 containerd[1456]: time="2026-03-10T01:34:41.049218806Z" level=info msg="StartContainer for \"e06ee7071d1bb99a8ff9b6707834b8d957232728c9e48ae6c41de737ef2b026c\"" Mar 10 01:34:41.069687 containerd[1456]: time="2026-03-10T01:34:41.065985846Z" level=info msg="CreateContainer within sandbox \"164e1ce85ce3de52df716893a10b508d006c9871fcf0492435017eb2d3cbd521\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"c4847801872a8acb86afdef8cd8b508bd56a6901e15177762306ae7e9213d3fa\"" Mar 10 01:34:41.069687 containerd[1456]: time="2026-03-10T01:34:41.068879408Z" level=info msg="StartContainer for \"c4847801872a8acb86afdef8cd8b508bd56a6901e15177762306ae7e9213d3fa\"" Mar 10 01:34:41.226185 kubelet[2229]: E0310 01:34:41.222671 2229 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:34:41.472626 kubelet[2229]: E0310 01:34:41.471341 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b56f1d763820a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:34:35.00115201 +0000 UTC m=+2.819897342,LastTimestamp:2026-03-10 01:34:35.00115201 +0000 UTC m=+2.819897342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:34:41.497903 kubelet[2229]: E0310 01:34:41.472834 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="6.4s" Mar 10 01:34:41.648962 systemd[1]: Started 
cri-containerd-c4847801872a8acb86afdef8cd8b508bd56a6901e15177762306ae7e9213d3fa.scope - libcontainer container c4847801872a8acb86afdef8cd8b508bd56a6901e15177762306ae7e9213d3fa. Mar 10 01:34:41.676162 systemd[1]: Started cri-containerd-e06ee7071d1bb99a8ff9b6707834b8d957232728c9e48ae6c41de737ef2b026c.scope - libcontainer container e06ee7071d1bb99a8ff9b6707834b8d957232728c9e48ae6c41de737ef2b026c. Mar 10 01:34:41.719111 systemd[1]: Started cri-containerd-66658b16ca138a1ed2b87eface345d0902228c9e1cf464b32a8da8a0aff7d923.scope - libcontainer container 66658b16ca138a1ed2b87eface345d0902228c9e1cf464b32a8da8a0aff7d923. Mar 10 01:34:42.231234 containerd[1456]: time="2026-03-10T01:34:42.230848480Z" level=info msg="StartContainer for \"66658b16ca138a1ed2b87eface345d0902228c9e1cf464b32a8da8a0aff7d923\" returns successfully" Mar 10 01:34:42.238262 kubelet[2229]: I0310 01:34:42.237855 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:34:42.240200 kubelet[2229]: E0310 01:34:42.239690 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Mar 10 01:34:42.268987 containerd[1456]: time="2026-03-10T01:34:42.268128799Z" level=info msg="StartContainer for \"c4847801872a8acb86afdef8cd8b508bd56a6901e15177762306ae7e9213d3fa\" returns successfully" Mar 10 01:34:42.366983 containerd[1456]: time="2026-03-10T01:34:42.364849094Z" level=info msg="StartContainer for \"e06ee7071d1bb99a8ff9b6707834b8d957232728c9e48ae6c41de737ef2b026c\" returns successfully" Mar 10 01:34:42.749054 kubelet[2229]: E0310 01:34:42.748188 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:34:43.117289 kubelet[2229]: E0310 01:34:43.117200 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:43.117923 kubelet[2229]: E0310 01:34:43.117648 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:43.126038 kubelet[2229]: E0310 01:34:43.125150 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:43.126038 kubelet[2229]: E0310 01:34:43.125371 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:43.135285 kubelet[2229]: E0310 01:34:43.133167 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:43.135285 kubelet[2229]: E0310 01:34:43.133866 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:43.285347 kubelet[2229]: E0310 01:34:43.283620 2229 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:34:44.174085 kubelet[2229]: E0310 01:34:44.172464 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Mar 10 01:34:44.174085 kubelet[2229]: E0310 01:34:44.172908 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:44.174085 kubelet[2229]: E0310 01:34:44.173293 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:44.174085 kubelet[2229]: E0310 01:34:44.173411 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:44.175947 kubelet[2229]: E0310 01:34:44.175702 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:44.175947 kubelet[2229]: E0310 01:34:44.175883 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:45.216455 kubelet[2229]: E0310 01:34:45.213835 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:45.216455 kubelet[2229]: E0310 01:34:45.214197 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:45.216455 kubelet[2229]: E0310 01:34:45.214675 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:45.216455 kubelet[2229]: E0310 01:34:45.215088 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:45.605906 kubelet[2229]: E0310 01:34:45.604108 2229 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:34:48.647060 kubelet[2229]: I0310 01:34:48.646668 2229 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:34:48.797302 kubelet[2229]: E0310 01:34:48.795064 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:48.797302 kubelet[2229]: E0310 01:34:48.796884 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:49.660989 kubelet[2229]: E0310 01:34:49.660464 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:49.660989 kubelet[2229]: E0310 01:34:49.660901 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:51.712495 kubelet[2229]: E0310 01:34:51.708123 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 01:34:51.819677 kubelet[2229]: E0310 01:34:51.819294 2229 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:34:51.819677 kubelet[2229]: E0310 01:34:51.819673 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 
01:34:51.938685 kubelet[2229]: I0310 01:34:51.937659 2229 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:34:51.967320 kubelet[2229]: E0310 01:34:51.966490 2229 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b56f1d763820a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:34:35.00115201 +0000 UTC m=+2.819897342,LastTimestamp:2026-03-10 01:34:35.00115201 +0000 UTC m=+2.819897342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:34:51.974935 kubelet[2229]: I0310 01:34:51.973687 2229 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:34:52.059376 kubelet[2229]: E0310 01:34:52.050094 2229 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 10 01:34:52.059376 kubelet[2229]: I0310 01:34:52.050428 2229 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:34:52.108631 kubelet[2229]: I0310 01:34:52.107276 2229 apiserver.go:52] "Watching apiserver" Mar 10 01:34:52.126296 kubelet[2229]: E0310 01:34:52.126164 2229 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:34:52.126296 kubelet[2229]: I0310 01:34:52.126238 2229 kubelet.go:3220] "Creating 
a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:34:52.148591 kubelet[2229]: E0310 01:34:52.148406 2229 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 10 01:34:52.173448 kubelet[2229]: I0310 01:34:52.173381 2229 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:34:58.770364 kubelet[2229]: I0310 01:34:58.770071 2229 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:34:58.815103 kubelet[2229]: E0310 01:34:58.814864 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:58.830346 systemd[1]: Reloading requested from client PID 2529 ('systemctl') (unit session-7.scope)... Mar 10 01:34:58.830379 systemd[1]: Reloading... Mar 10 01:34:59.136879 zram_generator::config[2568]: No configuration found. Mar 10 01:34:59.389708 kubelet[2229]: E0310 01:34:59.379397 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:59.696128 kubelet[2229]: I0310 01:34:59.694204 2229 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:34:59.701741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 10 01:34:59.732341 kubelet[2229]: E0310 01:34:59.729166 2229 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:59.949471 kubelet[2229]: I0310 01:34:59.947993 2229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.947919453 podStartE2EDuration="1.947919453s" podCreationTimestamp="2026-03-10 01:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:34:59.928439567 +0000 UTC m=+27.747184899" watchObservedRunningTime="2026-03-10 01:34:59.947919453 +0000 UTC m=+27.766664785" Mar 10 01:34:59.963099 systemd[1]: Reloading finished in 1131 ms. Mar 10 01:35:00.110901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:35:00.140410 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:35:00.140977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:35:00.141161 systemd[1]: kubelet.service: Consumed 7.639s CPU time, 129.2M memory peak, 0B memory swap peak. Mar 10 01:35:00.173729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:35:00.806146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:35:00.806692 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:35:01.006268 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:35:01.006268 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:35:01.006268 kubelet[2612]: I0310 01:35:01.006236 2612 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:35:01.045092 kubelet[2612]: I0310 01:35:01.043704 2612 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 10 01:35:01.045092 kubelet[2612]: I0310 01:35:01.044052 2612 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:35:01.045092 kubelet[2612]: I0310 01:35:01.044235 2612 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:35:01.045092 kubelet[2612]: I0310 01:35:01.044403 2612 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 01:35:01.047726 kubelet[2612]: I0310 01:35:01.047414 2612 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:35:01.061711 kubelet[2612]: I0310 01:35:01.061396 2612 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 01:35:01.148651 kubelet[2612]: I0310 01:35:01.136328 2612 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:35:01.199633 kubelet[2612]: E0310 01:35:01.199390 2612 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:35:01.199633 kubelet[2612]: I0310 01:35:01.199618 2612 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 10 01:35:01.256882 kubelet[2612]: I0310 01:35:01.248973 2612 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 01:35:01.256882 kubelet[2612]: I0310 01:35:01.249775 2612 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:35:01.256882 kubelet[2612]: I0310 01:35:01.250081 2612 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:35:01.256882 kubelet[2612]: I0310 01:35:01.250389 2612 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:35:01.262324 
kubelet[2612]: I0310 01:35:01.250401 2612 container_manager_linux.go:306] "Creating device plugin manager" Mar 10 01:35:01.262324 kubelet[2612]: I0310 01:35:01.250433 2612 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 01:35:01.262324 kubelet[2612]: I0310 01:35:01.252110 2612 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:35:01.262324 kubelet[2612]: I0310 01:35:01.253994 2612 kubelet.go:475] "Attempting to sync node with API server" Mar 10 01:35:01.262324 kubelet[2612]: I0310 01:35:01.254086 2612 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:35:01.262324 kubelet[2612]: I0310 01:35:01.254118 2612 kubelet.go:387] "Adding apiserver pod source" Mar 10 01:35:01.262324 kubelet[2612]: I0310 01:35:01.254140 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:35:01.271702 kubelet[2612]: I0310 01:35:01.271133 2612 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:35:01.275082 kubelet[2612]: I0310 01:35:01.272391 2612 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:35:01.278989 kubelet[2612]: I0310 01:35:01.277111 2612 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 01:35:01.303675 kubelet[2612]: I0310 01:35:01.298728 2612 server.go:1262] "Started kubelet" Mar 10 01:35:01.303675 kubelet[2612]: I0310 01:35:01.301358 2612 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:35:01.303675 kubelet[2612]: I0310 01:35:01.301425 2612 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 01:35:01.303675 kubelet[2612]: I0310 01:35:01.302249 2612 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:35:01.303675 kubelet[2612]: I0310 01:35:01.302312 2612 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:35:01.317658 kubelet[2612]: I0310 01:35:01.312995 2612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 01:35:01.326649 kubelet[2612]: I0310 01:35:01.326620 2612 server.go:310] "Adding debug handlers to kubelet server" Mar 10 01:35:01.333345 kubelet[2612]: I0310 01:35:01.333015 2612 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:35:01.334372 kubelet[2612]: I0310 01:35:01.334349 2612 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 10 01:35:01.334674 kubelet[2612]: I0310 01:35:01.334655 2612 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 01:35:01.334999 kubelet[2612]: I0310 01:35:01.334982 2612 reconciler.go:29] "Reconciler: start to sync state" Mar 10 01:35:01.339143 kubelet[2612]: I0310 01:35:01.339105 2612 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:35:01.341754 kubelet[2612]: E0310 01:35:01.341590 2612 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:35:01.342980 kubelet[2612]: I0310 01:35:01.342668 2612 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:35:01.360441 kubelet[2612]: I0310 01:35:01.360350 2612 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:35:01.450457 sudo[2640]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 10 01:35:01.460731 sudo[2640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 10 01:35:01.481882 kubelet[2612]: I0310 01:35:01.481764 2612 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 01:35:01.493409 kubelet[2612]: I0310 01:35:01.492196 2612 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 10 01:35:01.493409 kubelet[2612]: I0310 01:35:01.492233 2612 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 10 01:35:01.493409 kubelet[2612]: I0310 01:35:01.492313 2612 kubelet.go:2428] "Starting kubelet main sync loop" Mar 10 01:35:01.493409 kubelet[2612]: E0310 01:35:01.492380 2612 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:35:01.562381 kubelet[2612]: I0310 01:35:01.562195 2612 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 01:35:01.562381 kubelet[2612]: I0310 01:35:01.562276 2612 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 01:35:01.562381 kubelet[2612]: I0310 01:35:01.562307 2612 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:35:01.562738 kubelet[2612]: I0310 01:35:01.562483 2612 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 10 01:35:01.562738 
kubelet[2612]: I0310 01:35:01.562499 2612 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 10 01:35:01.562738 kubelet[2612]: I0310 01:35:01.562629 2612 policy_none.go:49] "None policy: Start" Mar 10 01:35:01.562738 kubelet[2612]: I0310 01:35:01.562645 2612 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 01:35:01.562738 kubelet[2612]: I0310 01:35:01.562664 2612 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 01:35:01.562993 kubelet[2612]: I0310 01:35:01.562848 2612 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 01:35:01.562993 kubelet[2612]: I0310 01:35:01.562863 2612 policy_none.go:47] "Start" Mar 10 01:35:01.575981 kubelet[2612]: E0310 01:35:01.575734 2612 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:35:01.581762 kubelet[2612]: I0310 01:35:01.581649 2612 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 01:35:01.581762 kubelet[2612]: I0310 01:35:01.581728 2612 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:35:01.582623 kubelet[2612]: I0310 01:35:01.582375 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 01:35:01.586087 kubelet[2612]: E0310 01:35:01.585303 2612 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 10 01:35:01.597611 kubelet[2612]: I0310 01:35:01.593423 2612 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:01.597611 kubelet[2612]: I0310 01:35:01.593446 2612 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:35:01.597611 kubelet[2612]: I0310 01:35:01.594016 2612 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:35:01.644208 kubelet[2612]: I0310 01:35:01.642677 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:35:01.644208 kubelet[2612]: I0310 01:35:01.642898 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:35:01.644208 kubelet[2612]: I0310 01:35:01.643018 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:35:01.644208 kubelet[2612]: I0310 01:35:01.643049 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:35:01.644208 kubelet[2612]: I0310 01:35:01.643075 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:35:01.650195 kubelet[2612]: I0310 01:35:01.643095 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67878fe3bf5562691b14866aa164ca85-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"67878fe3bf5562691b14866aa164ca85\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:01.650195 kubelet[2612]: I0310 01:35:01.643123 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:35:01.650195 kubelet[2612]: I0310 01:35:01.643159 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67878fe3bf5562691b14866aa164ca85-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"67878fe3bf5562691b14866aa164ca85\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:01.650195 kubelet[2612]: I0310 01:35:01.643183 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/67878fe3bf5562691b14866aa164ca85-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"67878fe3bf5562691b14866aa164ca85\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:01.650195 kubelet[2612]: E0310 01:35:01.648873 2612 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:01.653656 kubelet[2612]: E0310 01:35:01.652254 2612 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 10 01:35:01.725230 kubelet[2612]: I0310 01:35:01.725193 2612 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:35:01.765383 kubelet[2612]: I0310 01:35:01.765293 2612 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 10 01:35:01.765688 kubelet[2612]: I0310 01:35:01.765619 2612 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:35:01.956494 kubelet[2612]: E0310 01:35:01.953205 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:01.956494 kubelet[2612]: E0310 01:35:01.953729 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:01.959939 kubelet[2612]: E0310 01:35:01.959629 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:02.304925 kubelet[2612]: I0310 01:35:02.302935 2612 apiserver.go:52] "Watching apiserver" Mar 10 01:35:02.561732 kubelet[2612]: I0310 01:35:02.560032 2612 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:02.572423 kubelet[2612]: E0310 01:35:02.571931 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:02.576036 kubelet[2612]: E0310 01:35:02.575166 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:02.616265 kubelet[2612]: E0310 01:35:02.615961 2612 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 01:35:02.625252 kubelet[2612]: E0310 01:35:02.625139 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:02.635998 kubelet[2612]: I0310 01:35:02.635629 2612 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:35:02.939086 kubelet[2612]: I0310 01:35:02.931322 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.931302912 podStartE2EDuration="1.931302912s" podCreationTimestamp="2026-03-10 01:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:35:02.887966677 +0000 UTC m=+2.071247185" watchObservedRunningTime="2026-03-10 01:35:02.931302912 +0000 UTC m=+2.114583390" Mar 10 01:35:03.583964 kubelet[2612]: E0310 01:35:03.580726 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:03.583964 kubelet[2612]: E0310 01:35:03.582085 2612 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:04.738080 kubelet[2612]: E0310 01:35:04.736658 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:05.264920 kubelet[2612]: I0310 01:35:05.258969 2612 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 01:35:05.264920 kubelet[2612]: I0310 01:35:05.260925 2612 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 01:35:05.267209 containerd[1456]: time="2026-03-10T01:35:05.260295392Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 10 01:35:05.664649 kubelet[2612]: I0310 01:35:05.659345 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.659324766 podStartE2EDuration="6.659324766s" podCreationTimestamp="2026-03-10 01:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:35:02.938657895 +0000 UTC m=+2.121938615" watchObservedRunningTime="2026-03-10 01:35:05.659324766 +0000 UTC m=+4.842605285" Mar 10 01:35:05.780414 kubelet[2612]: E0310 01:35:05.779360 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:05.811660 systemd[1]: Created slice kubepods-besteffort-pod6a1ad710_92c6_4378_82ba_333a405d5c15.slice - libcontainer container kubepods-besteffort-pod6a1ad710_92c6_4378_82ba_333a405d5c15.slice. 
Mar 10 01:35:05.911975 kubelet[2612]: I0310 01:35:05.903155 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a1ad710-92c6-4378-82ba-333a405d5c15-xtables-lock\") pod \"kube-proxy-b477f\" (UID: \"6a1ad710-92c6-4378-82ba-333a405d5c15\") " pod="kube-system/kube-proxy-b477f" Mar 10 01:35:05.977159 kubelet[2612]: I0310 01:35:05.959887 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsfg4\" (UniqueName: \"kubernetes.io/projected/6a1ad710-92c6-4378-82ba-333a405d5c15-kube-api-access-gsfg4\") pod \"kube-proxy-b477f\" (UID: \"6a1ad710-92c6-4378-82ba-333a405d5c15\") " pod="kube-system/kube-proxy-b477f" Mar 10 01:35:05.977159 kubelet[2612]: I0310 01:35:05.960296 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a1ad710-92c6-4378-82ba-333a405d5c15-kube-proxy\") pod \"kube-proxy-b477f\" (UID: \"6a1ad710-92c6-4378-82ba-333a405d5c15\") " pod="kube-system/kube-proxy-b477f" Mar 10 01:35:05.977159 kubelet[2612]: I0310 01:35:05.960344 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a1ad710-92c6-4378-82ba-333a405d5c15-lib-modules\") pod \"kube-proxy-b477f\" (UID: \"6a1ad710-92c6-4378-82ba-333a405d5c15\") " pod="kube-system/kube-proxy-b477f" Mar 10 01:35:06.216666 sudo[2640]: pam_unix(sudo:session): session closed for user root Mar 10 01:35:07.047624 kubelet[2612]: E0310 01:35:07.044281 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:07.919638 kubelet[2612]: E0310 01:35:07.918421 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:07.958166 containerd[1456]: time="2026-03-10T01:35:07.955364148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b477f,Uid:6a1ad710-92c6-4378-82ba-333a405d5c15,Namespace:kube-system,Attempt:0,}" Mar 10 01:35:07.979707 kubelet[2612]: E0310 01:35:07.978320 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:08.381911 containerd[1456]: time="2026-03-10T01:35:08.381322285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:35:08.392669 containerd[1456]: time="2026-03-10T01:35:08.392450149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:35:08.393175 containerd[1456]: time="2026-03-10T01:35:08.392999865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:35:08.393743 containerd[1456]: time="2026-03-10T01:35:08.393696687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:35:08.941416 systemd[1]: run-containerd-runc-k8s.io-41e88f3015d35274afefad0d28fbbb138c6a181cbbf21a52db774bd3f3523ad5-runc.Td0ugZ.mount: Deactivated successfully. Mar 10 01:35:08.977292 systemd[1]: Started cri-containerd-41e88f3015d35274afefad0d28fbbb138c6a181cbbf21a52db774bd3f3523ad5.scope - libcontainer container 41e88f3015d35274afefad0d28fbbb138c6a181cbbf21a52db774bd3f3523ad5. 
Mar 10 01:35:09.147113 kubelet[2612]: E0310 01:35:09.146629 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:09.399888 containerd[1456]: time="2026-03-10T01:35:09.398452339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b477f,Uid:6a1ad710-92c6-4378-82ba-333a405d5c15,Namespace:kube-system,Attempt:0,} returns sandbox id \"41e88f3015d35274afefad0d28fbbb138c6a181cbbf21a52db774bd3f3523ad5\"" Mar 10 01:35:09.405207 kubelet[2612]: E0310 01:35:09.404671 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:09.556874 containerd[1456]: time="2026-03-10T01:35:09.517722163Z" level=info msg="CreateContainer within sandbox \"41e88f3015d35274afefad0d28fbbb138c6a181cbbf21a52db774bd3f3523ad5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 10 01:35:09.712456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518992462.mount: Deactivated successfully. Mar 10 01:35:09.734199 containerd[1456]: time="2026-03-10T01:35:09.732209037Z" level=info msg="CreateContainer within sandbox \"41e88f3015d35274afefad0d28fbbb138c6a181cbbf21a52db774bd3f3523ad5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5dab4f3d6594bbbbfaff892c07996c89b222619fe8a4de081adad77a88f757ea\"" Mar 10 01:35:09.735312 containerd[1456]: time="2026-03-10T01:35:09.734991677Z" level=info msg="StartContainer for \"5dab4f3d6594bbbbfaff892c07996c89b222619fe8a4de081adad77a88f757ea\"" Mar 10 01:35:09.970646 systemd[1]: Started cri-containerd-5dab4f3d6594bbbbfaff892c07996c89b222619fe8a4de081adad77a88f757ea.scope - libcontainer container 5dab4f3d6594bbbbfaff892c07996c89b222619fe8a4de081adad77a88f757ea. 
Mar 10 01:35:10.131773 kubelet[2612]: E0310 01:35:10.124390 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:11.242040 kubelet[2612]: I0310 01:35:11.240001 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-etc-cni-netd\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.242040 kubelet[2612]: I0310 01:35:11.240094 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-kernel\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.242040 kubelet[2612]: I0310 01:35:11.240120 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-run\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.242040 kubelet[2612]: I0310 01:35:11.240140 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-bpf-maps\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.242040 kubelet[2612]: I0310 01:35:11.240161 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hostproc\") pod 
\"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.242040 kubelet[2612]: I0310 01:35:11.240181 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-cgroup\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.247046 kubelet[2612]: I0310 01:35:11.240200 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cni-path\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.247046 kubelet[2612]: I0310 01:35:11.240218 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-lib-modules\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.247046 kubelet[2612]: I0310 01:35:11.240237 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01ef80d5-fbce-4009-bc6a-86a4ac82a706-clustermesh-secrets\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.247046 kubelet[2612]: I0310 01:35:11.240261 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-config-path\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.247046 
kubelet[2612]: I0310 01:35:11.240283 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-net\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.247046 kubelet[2612]: I0310 01:35:11.240305 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hubble-tls\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.248168 kubelet[2612]: I0310 01:35:11.240329 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-xtables-lock\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.248168 kubelet[2612]: I0310 01:35:11.240349 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vmx\" (UniqueName: \"kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-kube-api-access-j7vmx\") pod \"cilium-9gr8j\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") " pod="kube-system/cilium-9gr8j" Mar 10 01:35:11.271711 systemd[1]: Created slice kubepods-burstable-pod01ef80d5_fbce_4009_bc6a_86a4ac82a706.slice - libcontainer container kubepods-burstable-pod01ef80d5_fbce_4009_bc6a_86a4ac82a706.slice. 
Mar 10 01:35:11.605013 containerd[1456]: time="2026-03-10T01:35:11.604756468Z" level=info msg="StartContainer for \"5dab4f3d6594bbbbfaff892c07996c89b222619fe8a4de081adad77a88f757ea\" returns successfully" Mar 10 01:35:11.623354 systemd[1]: Created slice kubepods-besteffort-poda3f880b4_bcb2_44b1_b2fb_16ea234180ea.slice - libcontainer container kubepods-besteffort-poda3f880b4_bcb2_44b1_b2fb_16ea234180ea.slice. Mar 10 01:35:11.700610 kubelet[2612]: I0310 01:35:11.700135 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8g78\" (UniqueName: \"kubernetes.io/projected/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-kube-api-access-c8g78\") pod \"cilium-operator-6f9c7c5859-jzzk2\" (UID: \"a3f880b4-bcb2-44b1-b2fb-16ea234180ea\") " pod="kube-system/cilium-operator-6f9c7c5859-jzzk2" Mar 10 01:35:11.700610 kubelet[2612]: I0310 01:35:11.700187 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-jzzk2\" (UID: \"a3f880b4-bcb2-44b1-b2fb-16ea234180ea\") " pod="kube-system/cilium-operator-6f9c7c5859-jzzk2" Mar 10 01:35:11.896196 kubelet[2612]: E0310 01:35:11.894375 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:11.896645 containerd[1456]: time="2026-03-10T01:35:11.895326567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gr8j,Uid:01ef80d5-fbce-4009-bc6a-86a4ac82a706,Namespace:kube-system,Attempt:0,}" Mar 10 01:35:12.001806 kubelet[2612]: E0310 01:35:11.970911 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:12.053053 
containerd[1456]: time="2026-03-10T01:35:11.975762863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-jzzk2,Uid:a3f880b4-bcb2-44b1-b2fb-16ea234180ea,Namespace:kube-system,Attempt:0,}" Mar 10 01:35:12.250367 containerd[1456]: time="2026-03-10T01:35:12.243101210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:35:12.250367 containerd[1456]: time="2026-03-10T01:35:12.244938182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:35:12.250367 containerd[1456]: time="2026-03-10T01:35:12.245058647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:35:12.250367 containerd[1456]: time="2026-03-10T01:35:12.248651091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:35:12.310897 containerd[1456]: time="2026-03-10T01:35:12.306446899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:35:12.310897 containerd[1456]: time="2026-03-10T01:35:12.306668323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:35:12.310897 containerd[1456]: time="2026-03-10T01:35:12.306912940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:35:12.310897 containerd[1456]: time="2026-03-10T01:35:12.308743085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:35:12.318003 kubelet[2612]: E0310 01:35:12.314494 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:12.586066 systemd[1]: Started cri-containerd-30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18.scope - libcontainer container 30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18. Mar 10 01:35:12.656172 systemd[1]: Started cri-containerd-1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011.scope - libcontainer container 1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011. Mar 10 01:35:12.663815 kubelet[2612]: E0310 01:35:12.663746 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:12.761057 kubelet[2612]: I0310 01:35:12.754208 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b477f" podStartSLOduration=7.754186986 podStartE2EDuration="7.754186986s" podCreationTimestamp="2026-03-10 01:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:35:12.400950939 +0000 UTC m=+11.584231419" watchObservedRunningTime="2026-03-10 01:35:12.754186986 +0000 UTC m=+11.937467466" Mar 10 01:35:12.902616 containerd[1456]: time="2026-03-10T01:35:12.899801551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gr8j,Uid:01ef80d5-fbce-4009-bc6a-86a4ac82a706,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\"" Mar 10 01:35:12.906009 kubelet[2612]: E0310 01:35:12.905252 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:12.912788 containerd[1456]: time="2026-03-10T01:35:12.912119997Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 10 01:35:13.027476 containerd[1456]: time="2026-03-10T01:35:13.027374601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-jzzk2,Uid:a3f880b4-bcb2-44b1-b2fb-16ea234180ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18\"" Mar 10 01:35:13.033214 kubelet[2612]: E0310 01:35:13.031357 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:13.474084 kubelet[2612]: E0310 01:35:13.473219 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:13.474084 kubelet[2612]: E0310 01:35:13.473372 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:30.185392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329131388.mount: Deactivated successfully. 
Mar 10 01:35:40.293458 containerd[1456]: time="2026-03-10T01:35:40.290747621Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:35:40.293458 containerd[1456]: time="2026-03-10T01:35:40.292456089Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 10 01:35:40.298080 containerd[1456]: time="2026-03-10T01:35:40.297815013Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:35:40.306080 containerd[1456]: time="2026-03-10T01:35:40.305684244Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 27.393509615s" Mar 10 01:35:40.306080 containerd[1456]: time="2026-03-10T01:35:40.305744797Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 10 01:35:40.309178 containerd[1456]: time="2026-03-10T01:35:40.309056999Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 10 01:35:40.317248 containerd[1456]: time="2026-03-10T01:35:40.317000128Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:35:40.378313 containerd[1456]: time="2026-03-10T01:35:40.377958649Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\"" Mar 10 01:35:40.383116 containerd[1456]: time="2026-03-10T01:35:40.382116299Z" level=info msg="StartContainer for \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\"" Mar 10 01:35:40.495850 systemd[1]: Started cri-containerd-afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7.scope - libcontainer container afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7. Mar 10 01:35:40.575887 containerd[1456]: time="2026-03-10T01:35:40.575472903Z" level=info msg="StartContainer for \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\" returns successfully" Mar 10 01:35:40.612374 systemd[1]: cri-containerd-afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7.scope: Deactivated successfully. 
Mar 10 01:35:40.918403 kubelet[2612]: E0310 01:35:40.917870 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:40.942854 containerd[1456]: time="2026-03-10T01:35:40.942234772Z" level=info msg="shim disconnected" id=afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7 namespace=k8s.io Mar 10 01:35:40.942854 containerd[1456]: time="2026-03-10T01:35:40.942313830Z" level=warning msg="cleaning up after shim disconnected" id=afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7 namespace=k8s.io Mar 10 01:35:40.942854 containerd[1456]: time="2026-03-10T01:35:40.942333676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:35:41.352809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7-rootfs.mount: Deactivated successfully. Mar 10 01:35:42.248342 kubelet[2612]: E0310 01:35:42.238874 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:42.538406 containerd[1456]: time="2026-03-10T01:35:42.537055929Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:35:42.787805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445624936.mount: Deactivated successfully. 
Mar 10 01:35:42.797118 containerd[1456]: time="2026-03-10T01:35:42.796845400Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\"" Mar 10 01:35:42.800587 containerd[1456]: time="2026-03-10T01:35:42.800393838Z" level=info msg="StartContainer for \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\"" Mar 10 01:35:42.912248 systemd[1]: Started cri-containerd-603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526.scope - libcontainer container 603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526. Mar 10 01:35:43.021658 containerd[1456]: time="2026-03-10T01:35:43.020325095Z" level=info msg="StartContainer for \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\" returns successfully" Mar 10 01:35:43.072234 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 01:35:43.072719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:35:43.073135 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:35:43.085164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:35:43.085786 systemd[1]: cri-containerd-603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526.scope: Deactivated successfully. Mar 10 01:35:43.133014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526-rootfs.mount: Deactivated successfully. Mar 10 01:35:43.231716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 10 01:35:43.236626 containerd[1456]: time="2026-03-10T01:35:43.236168317Z" level=info msg="shim disconnected" id=603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526 namespace=k8s.io
Mar 10 01:35:43.236626 containerd[1456]: time="2026-03-10T01:35:43.236271530Z" level=warning msg="cleaning up after shim disconnected" id=603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526 namespace=k8s.io
Mar 10 01:35:43.236626 containerd[1456]: time="2026-03-10T01:35:43.236289283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:35:43.241768 kubelet[2612]: E0310 01:35:43.241370 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:43.380735 containerd[1456]: time="2026-03-10T01:35:43.378790431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:35:43.382501 containerd[1456]: time="2026-03-10T01:35:43.382359222Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 10 01:35:43.384745 containerd[1456]: time="2026-03-10T01:35:43.384661115Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:35:43.388777 containerd[1456]: time="2026-03-10T01:35:43.387801474Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.078697818s"
Mar 10 01:35:43.388777 containerd[1456]: time="2026-03-10T01:35:43.387846318Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 10 01:35:43.410388 containerd[1456]: time="2026-03-10T01:35:43.408363087Z" level=info msg="CreateContainer within sandbox \"30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 10 01:35:43.449614 containerd[1456]: time="2026-03-10T01:35:43.449392826Z" level=info msg="CreateContainer within sandbox \"30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\""
Mar 10 01:35:43.456564 containerd[1456]: time="2026-03-10T01:35:43.452467734Z" level=info msg="StartContainer for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\""
Mar 10 01:35:43.513213 systemd[1]: Started cri-containerd-f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99.scope - libcontainer container f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99.
Mar 10 01:35:43.569593 containerd[1456]: time="2026-03-10T01:35:43.569305115Z" level=info msg="StartContainer for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" returns successfully"
Mar 10 01:35:44.343148 kubelet[2612]: E0310 01:35:44.342253 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:44.399449 kubelet[2612]: E0310 01:35:44.397828 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:44.457661 kubelet[2612]: I0310 01:35:44.456329 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-jzzk2" podStartSLOduration=3.105073295 podStartE2EDuration="33.456299115s" podCreationTimestamp="2026-03-10 01:35:11 +0000 UTC" firstStartedPulling="2026-03-10 01:35:13.037756821 +0000 UTC m=+12.221037301" lastFinishedPulling="2026-03-10 01:35:43.388982642 +0000 UTC m=+42.572263121" observedRunningTime="2026-03-10 01:35:44.452743273 +0000 UTC m=+43.636023782" watchObservedRunningTime="2026-03-10 01:35:44.456299115 +0000 UTC m=+43.639579594"
Mar 10 01:35:44.485179 containerd[1456]: time="2026-03-10T01:35:44.483316734Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 10 01:35:44.612876 containerd[1456]: time="2026-03-10T01:35:44.611716202Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\""
Mar 10 01:35:44.628317 containerd[1456]: time="2026-03-10T01:35:44.618496849Z" level=info msg="StartContainer for \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\""
Mar 10 01:35:44.840048 systemd[1]: Started cri-containerd-a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710.scope - libcontainer container a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710.
Mar 10 01:35:44.998346 systemd[1]: cri-containerd-a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710.scope: Deactivated successfully.
Mar 10 01:35:45.001085 containerd[1456]: time="2026-03-10T01:35:45.000754803Z" level=info msg="StartContainer for \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\" returns successfully"
Mar 10 01:35:45.125194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710-rootfs.mount: Deactivated successfully.
Mar 10 01:35:45.148094 containerd[1456]: time="2026-03-10T01:35:45.147843042Z" level=info msg="shim disconnected" id=a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710 namespace=k8s.io
Mar 10 01:35:45.148094 containerd[1456]: time="2026-03-10T01:35:45.148053867Z" level=warning msg="cleaning up after shim disconnected" id=a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710 namespace=k8s.io
Mar 10 01:35:45.148094 containerd[1456]: time="2026-03-10T01:35:45.148077372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:35:45.408625 kubelet[2612]: E0310 01:35:45.406002 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:45.416988 kubelet[2612]: E0310 01:35:45.416879 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:45.430465 containerd[1456]: time="2026-03-10T01:35:45.430411101Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 10 01:35:45.491007 containerd[1456]: time="2026-03-10T01:35:45.490859368Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\""
Mar 10 01:35:45.491952 containerd[1456]: time="2026-03-10T01:35:45.491862471Z" level=info msg="StartContainer for \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\""
Mar 10 01:35:45.582849 systemd[1]: Started cri-containerd-981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3.scope - libcontainer container 981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3.
Mar 10 01:35:45.637867 systemd[1]: cri-containerd-981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3.scope: Deactivated successfully.
Mar 10 01:35:45.661170 containerd[1456]: time="2026-03-10T01:35:45.654751281Z" level=info msg="StartContainer for \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\" returns successfully"
Mar 10 01:35:45.754614 containerd[1456]: time="2026-03-10T01:35:45.752497430Z" level=info msg="shim disconnected" id=981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3 namespace=k8s.io
Mar 10 01:35:45.754614 containerd[1456]: time="2026-03-10T01:35:45.752767085Z" level=warning msg="cleaning up after shim disconnected" id=981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3 namespace=k8s.io
Mar 10 01:35:45.754614 containerd[1456]: time="2026-03-10T01:35:45.752780670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:35:46.417844 kubelet[2612]: E0310 01:35:46.417029 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:46.442069 containerd[1456]: time="2026-03-10T01:35:46.441019231Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 10 01:35:46.490413 containerd[1456]: time="2026-03-10T01:35:46.490266538Z" level=info msg="CreateContainer within sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\""
Mar 10 01:35:46.494078 containerd[1456]: time="2026-03-10T01:35:46.493984769Z" level=info msg="StartContainer for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\""
Mar 10 01:35:46.559792 systemd[1]: Started cri-containerd-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40.scope - libcontainer container 79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40.
Mar 10 01:35:46.619808 containerd[1456]: time="2026-03-10T01:35:46.619137277Z" level=info msg="StartContainer for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" returns successfully"
Mar 10 01:35:46.782007 systemd[1]: run-containerd-runc-k8s.io-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40-runc.l0L9Zv.mount: Deactivated successfully.
Mar 10 01:35:46.979626 kubelet[2612]: I0310 01:35:46.977973 2612 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 10 01:35:47.118389 systemd[1]: Created slice kubepods-burstable-pod9b840a6c_6faf_42ed_8185_34879af4f7fa.slice - libcontainer container kubepods-burstable-pod9b840a6c_6faf_42ed_8185_34879af4f7fa.slice.
Mar 10 01:35:47.130614 systemd[1]: Created slice kubepods-burstable-poddd8bcadd_1a89_4a9f_a54f_d9b2fdca5f21.slice - libcontainer container kubepods-burstable-poddd8bcadd_1a89_4a9f_a54f_d9b2fdca5f21.slice.
Mar 10 01:35:47.164643 kubelet[2612]: I0310 01:35:47.164465 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgf4d\" (UniqueName: \"kubernetes.io/projected/9b840a6c-6faf-42ed-8185-34879af4f7fa-kube-api-access-vgf4d\") pod \"coredns-66bc5c9577-zvjft\" (UID: \"9b840a6c-6faf-42ed-8185-34879af4f7fa\") " pod="kube-system/coredns-66bc5c9577-zvjft"
Mar 10 01:35:47.165597 kubelet[2612]: I0310 01:35:47.165445 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd8bcadd-1a89-4a9f-a54f-d9b2fdca5f21-config-volume\") pod \"coredns-66bc5c9577-rdwtc\" (UID: \"dd8bcadd-1a89-4a9f-a54f-d9b2fdca5f21\") " pod="kube-system/coredns-66bc5c9577-rdwtc"
Mar 10 01:35:47.167381 kubelet[2612]: I0310 01:35:47.166687 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b840a6c-6faf-42ed-8185-34879af4f7fa-config-volume\") pod \"coredns-66bc5c9577-zvjft\" (UID: \"9b840a6c-6faf-42ed-8185-34879af4f7fa\") " pod="kube-system/coredns-66bc5c9577-zvjft"
Mar 10 01:35:47.168587 kubelet[2612]: I0310 01:35:47.167641 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnc4n\" (UniqueName: \"kubernetes.io/projected/dd8bcadd-1a89-4a9f-a54f-d9b2fdca5f21-kube-api-access-vnc4n\") pod \"coredns-66bc5c9577-rdwtc\" (UID: \"dd8bcadd-1a89-4a9f-a54f-d9b2fdca5f21\") " pod="kube-system/coredns-66bc5c9577-rdwtc"
Mar 10 01:35:47.437322 kubelet[2612]: E0310 01:35:47.437039 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:47.441265 kubelet[2612]: E0310 01:35:47.441152 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:47.447813 kubelet[2612]: E0310 01:35:47.447391 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:47.468404 containerd[1456]: time="2026-03-10T01:35:47.468258120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zvjft,Uid:9b840a6c-6faf-42ed-8185-34879af4f7fa,Namespace:kube-system,Attempt:0,}"
Mar 10 01:35:47.477377 containerd[1456]: time="2026-03-10T01:35:47.477275148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rdwtc,Uid:dd8bcadd-1a89-4a9f-a54f-d9b2fdca5f21,Namespace:kube-system,Attempt:0,}"
Mar 10 01:35:47.491715 kubelet[2612]: I0310 01:35:47.489055 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9gr8j" podStartSLOduration=10.092692373 podStartE2EDuration="37.489036704s" podCreationTimestamp="2026-03-10 01:35:10 +0000 UTC" firstStartedPulling="2026-03-10 01:35:12.911625443 +0000 UTC m=+12.094905921" lastFinishedPulling="2026-03-10 01:35:40.307969774 +0000 UTC m=+39.491250252" observedRunningTime="2026-03-10 01:35:47.488465605 +0000 UTC m=+46.671746134" watchObservedRunningTime="2026-03-10 01:35:47.489036704 +0000 UTC m=+46.672317183"
Mar 10 01:35:48.463360 kubelet[2612]: E0310 01:35:48.462699 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:49.273950 systemd-networkd[1380]: cilium_host: Link UP
Mar 10 01:35:49.274283 systemd-networkd[1380]: cilium_net: Link UP
Mar 10 01:35:49.274292 systemd-networkd[1380]: cilium_net: Gained carrier
Mar 10 01:35:49.275970 systemd-networkd[1380]: cilium_host: Gained carrier
Mar 10 01:35:49.371392 systemd-networkd[1380]: cilium_host: Gained IPv6LL
Mar 10 01:35:49.472820 kubelet[2612]: E0310 01:35:49.471277 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:49.576391 systemd-networkd[1380]: cilium_vxlan: Link UP
Mar 10 01:35:49.576736 systemd-networkd[1380]: cilium_vxlan: Gained carrier
Mar 10 01:35:49.964791 kernel: NET: Registered PF_ALG protocol family
Mar 10 01:35:50.201433 systemd-networkd[1380]: cilium_net: Gained IPv6LL
Mar 10 01:35:51.426749 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL
Mar 10 01:35:53.239134 systemd-networkd[1380]: lxc_health: Link UP
Mar 10 01:35:53.396388 systemd-networkd[1380]: lxc_health: Gained carrier
Mar 10 01:35:53.897848 kubelet[2612]: E0310 01:35:53.894393 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:53.924365 systemd-networkd[1380]: lxc31ed1915a1a5: Link UP
Mar 10 01:35:53.935623 kernel: eth0: renamed from tmpc8a81
Mar 10 01:35:53.940692 systemd-networkd[1380]: lxcf7c8f0dbae63: Link UP
Mar 10 01:35:53.968666 kernel: eth0: renamed from tmpbf047
Mar 10 01:35:53.974602 systemd-networkd[1380]: lxc31ed1915a1a5: Gained carrier
Mar 10 01:35:53.982040 systemd-networkd[1380]: lxcf7c8f0dbae63: Gained carrier
Mar 10 01:35:54.533155 kubelet[2612]: E0310 01:35:54.532308 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:54.691467 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Mar 10 01:35:55.460304 systemd-networkd[1380]: lxc31ed1915a1a5: Gained IPv6LL
Mar 10 01:35:55.538695 kubelet[2612]: E0310 01:35:55.535036 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:55.650234 systemd-networkd[1380]: lxcf7c8f0dbae63: Gained IPv6LL
Mar 10 01:35:55.829251 systemd[1]: run-containerd-runc-k8s.io-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40-runc.zcyNqe.mount: Deactivated successfully.
Mar 10 01:36:00.342497 containerd[1456]: time="2026-03-10T01:36:00.324802707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:36:00.342497 containerd[1456]: time="2026-03-10T01:36:00.325066019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:36:00.342497 containerd[1456]: time="2026-03-10T01:36:00.325097077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:36:00.342497 containerd[1456]: time="2026-03-10T01:36:00.325435349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:36:00.398800 containerd[1456]: time="2026-03-10T01:36:00.396626336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:36:00.398800 containerd[1456]: time="2026-03-10T01:36:00.396712617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:36:00.398800 containerd[1456]: time="2026-03-10T01:36:00.396730120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:36:00.398800 containerd[1456]: time="2026-03-10T01:36:00.397000575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:36:00.490040 systemd[1]: Started cri-containerd-c8a81788bb1e34d028ed4f1d8f6bb8ede50774cbccc77eaccfdda461a9d04aa1.scope - libcontainer container c8a81788bb1e34d028ed4f1d8f6bb8ede50774cbccc77eaccfdda461a9d04aa1.
Mar 10 01:36:00.532780 systemd[1]: Started cri-containerd-bf0474933c4c75473b8538cc602d43e51c25a23386833cd303c37d1346da843d.scope - libcontainer container bf0474933c4c75473b8538cc602d43e51c25a23386833cd303c37d1346da843d.
Mar 10 01:36:00.545470 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:36:00.576153 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:36:00.630436 containerd[1456]: time="2026-03-10T01:36:00.629718701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zvjft,Uid:9b840a6c-6faf-42ed-8185-34879af4f7fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8a81788bb1e34d028ed4f1d8f6bb8ede50774cbccc77eaccfdda461a9d04aa1\""
Mar 10 01:36:00.635700 containerd[1456]: time="2026-03-10T01:36:00.635651704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rdwtc,Uid:dd8bcadd-1a89-4a9f-a54f-d9b2fdca5f21,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf0474933c4c75473b8538cc602d43e51c25a23386833cd303c37d1346da843d\""
Mar 10 01:36:00.640292 kubelet[2612]: E0310 01:36:00.639032 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:00.640292 kubelet[2612]: E0310 01:36:00.639427 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:00.668906 containerd[1456]: time="2026-03-10T01:36:00.668845995Z" level=info msg="CreateContainer within sandbox \"c8a81788bb1e34d028ed4f1d8f6bb8ede50774cbccc77eaccfdda461a9d04aa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:36:00.677466 containerd[1456]: time="2026-03-10T01:36:00.677317735Z" level=info msg="CreateContainer within sandbox \"bf0474933c4c75473b8538cc602d43e51c25a23386833cd303c37d1346da843d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:36:00.838430 containerd[1456]: time="2026-03-10T01:36:00.838167049Z" level=info msg="CreateContainer within sandbox \"c8a81788bb1e34d028ed4f1d8f6bb8ede50774cbccc77eaccfdda461a9d04aa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67d1150386cbab79a9ff5e152ea620cb80319522686391fc6fa3c83b16507655\""
Mar 10 01:36:00.842303 containerd[1456]: time="2026-03-10T01:36:00.839957186Z" level=info msg="StartContainer for \"67d1150386cbab79a9ff5e152ea620cb80319522686391fc6fa3c83b16507655\""
Mar 10 01:36:00.876040 containerd[1456]: time="2026-03-10T01:36:00.875887231Z" level=info msg="CreateContainer within sandbox \"bf0474933c4c75473b8538cc602d43e51c25a23386833cd303c37d1346da843d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87f831c62e636eb3e85d48c598dc523be0cba4d20b0828350044ecf4b63c22db\""
Mar 10 01:36:00.879118 containerd[1456]: time="2026-03-10T01:36:00.877443414Z" level=info msg="StartContainer for \"87f831c62e636eb3e85d48c598dc523be0cba4d20b0828350044ecf4b63c22db\""
Mar 10 01:36:00.932203 systemd[1]: Started cri-containerd-67d1150386cbab79a9ff5e152ea620cb80319522686391fc6fa3c83b16507655.scope - libcontainer container 67d1150386cbab79a9ff5e152ea620cb80319522686391fc6fa3c83b16507655.
Mar 10 01:36:00.978936 systemd[1]: Started cri-containerd-87f831c62e636eb3e85d48c598dc523be0cba4d20b0828350044ecf4b63c22db.scope - libcontainer container 87f831c62e636eb3e85d48c598dc523be0cba4d20b0828350044ecf4b63c22db.
Mar 10 01:36:01.119350 containerd[1456]: time="2026-03-10T01:36:01.118814714Z" level=info msg="StartContainer for \"87f831c62e636eb3e85d48c598dc523be0cba4d20b0828350044ecf4b63c22db\" returns successfully"
Mar 10 01:36:01.120074 containerd[1456]: time="2026-03-10T01:36:01.119805407Z" level=info msg="StartContainer for \"67d1150386cbab79a9ff5e152ea620cb80319522686391fc6fa3c83b16507655\" returns successfully"
Mar 10 01:36:01.381033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885880979.mount: Deactivated successfully.
Mar 10 01:36:01.579939 kubelet[2612]: E0310 01:36:01.579842 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:01.590567 kubelet[2612]: E0310 01:36:01.589057 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:01.618281 kubelet[2612]: I0310 01:36:01.617690 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rdwtc" podStartSLOduration=56.617667885 podStartE2EDuration="56.617667885s" podCreationTimestamp="2026-03-10 01:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:36:01.614136783 +0000 UTC m=+60.797417292" watchObservedRunningTime="2026-03-10 01:36:01.617667885 +0000 UTC m=+60.800948374"
Mar 10 01:36:01.678367 kubelet[2612]: I0310 01:36:01.677334 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zvjft" podStartSLOduration=56.67731428 podStartE2EDuration="56.67731428s" podCreationTimestamp="2026-03-10 01:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:36:01.677150095 +0000 UTC m=+60.860430574" watchObservedRunningTime="2026-03-10 01:36:01.67731428 +0000 UTC m=+60.860594760"
Mar 10 01:36:02.594098 kubelet[2612]: E0310 01:36:02.593236 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:02.595300 kubelet[2612]: E0310 01:36:02.594145 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:02.866807 sudo[1636]: pam_unix(sudo:session): session closed for user root
Mar 10 01:36:02.875621 sshd[1633]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:02.895345 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:38322.service: Deactivated successfully.
Mar 10 01:36:02.896014 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit.
Mar 10 01:36:02.901734 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 01:36:02.902843 systemd[1]: session-7.scope: Consumed 18.284s CPU time, 162.4M memory peak, 0B memory swap peak.
Mar 10 01:36:02.909603 systemd-logind[1446]: Removed session 7.
Mar 10 01:36:03.596598 kubelet[2612]: E0310 01:36:03.595863 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:03.597790 kubelet[2612]: E0310 01:36:03.597705 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:16.494388 kubelet[2612]: E0310 01:36:16.494265 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:16.494388 kubelet[2612]: E0310 01:36:16.494434 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:24.308426 kubelet[2612]: E0310 01:36:24.127449 2612 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.589s"
Mar 10 01:36:30.497947 kubelet[2612]: E0310 01:36:30.497694 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:41.500372 kubelet[2612]: E0310 01:36:41.496758 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:10.497055 kubelet[2612]: E0310 01:37:10.496907 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:12.382864 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:54388.service - OpenSSH per-connection server daemon (10.0.0.1:54388).
Mar 10 01:37:12.659420 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 54388 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:12.663105 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:12.713130 systemd-logind[1446]: New session 8 of user core.
Mar 10 01:37:12.723932 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 01:37:13.102781 sshd[4188]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:13.123411 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:54388.service: Deactivated successfully.
Mar 10 01:37:13.139899 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 01:37:13.146213 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit.
Mar 10 01:37:13.153361 systemd-logind[1446]: Removed session 8.
Mar 10 01:37:14.499789 kubelet[2612]: E0310 01:37:14.499230 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:18.140238 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:54398.service - OpenSSH per-connection server daemon (10.0.0.1:54398).
Mar 10 01:37:18.207323 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 54398 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:18.215623 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:18.235261 systemd-logind[1446]: New session 9 of user core.
Mar 10 01:37:18.243619 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 01:37:18.466304 sshd[4206]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:18.477218 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:54398.service: Deactivated successfully.
Mar 10 01:37:18.484740 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 01:37:18.486469 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit.
Mar 10 01:37:18.490251 systemd-logind[1446]: Removed session 9.
Mar 10 01:37:21.519110 kubelet[2612]: E0310 01:37:21.516919 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:23.487274 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:49484.service - OpenSSH per-connection server daemon (10.0.0.1:49484).
Mar 10 01:37:23.581091 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 49484 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:23.585493 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:23.603421 systemd-logind[1446]: New session 10 of user core.
Mar 10 01:37:23.609778 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 10 01:37:23.829714 sshd[4222]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:23.840204 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:49484.service: Deactivated successfully.
Mar 10 01:37:23.847933 systemd[1]: session-10.scope: Deactivated successfully.
Mar 10 01:37:23.851987 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit.
Mar 10 01:37:23.859651 systemd-logind[1446]: Removed session 10.
Mar 10 01:37:28.893705 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:49500.service - OpenSSH per-connection server daemon (10.0.0.1:49500).
Mar 10 01:37:28.941470 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 49500 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:28.945757 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:28.986052 systemd-logind[1446]: New session 11 of user core.
Mar 10 01:37:28.999597 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 10 01:37:29.276801 sshd[4238]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:29.287964 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:49500.service: Deactivated successfully.
Mar 10 01:37:29.292020 systemd[1]: session-11.scope: Deactivated successfully.
Mar 10 01:37:29.293289 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit.
Mar 10 01:37:29.297266 systemd-logind[1446]: Removed session 11.
Mar 10 01:37:32.494428 kubelet[2612]: E0310 01:37:32.494228 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:34.323928 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:43222.service - OpenSSH per-connection server daemon (10.0.0.1:43222).
Mar 10 01:37:34.422777 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 43222 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:34.426469 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:34.450758 systemd-logind[1446]: New session 12 of user core.
Mar 10 01:37:34.461887 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 10 01:37:34.495351 kubelet[2612]: E0310 01:37:34.493797 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:34.804135 sshd[4253]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:34.815630 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:43222.service: Deactivated successfully.
Mar 10 01:37:34.828726 systemd[1]: session-12.scope: Deactivated successfully.
Mar 10 01:37:34.831631 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit.
Mar 10 01:37:34.835451 systemd-logind[1446]: Removed session 12.
Mar 10 01:37:38.495653 kubelet[2612]: E0310 01:37:38.495220 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:39.855371 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:43226.service - OpenSSH per-connection server daemon (10.0.0.1:43226).
Mar 10 01:37:39.904419 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 43226 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:39.911079 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:39.930693 systemd-logind[1446]: New session 13 of user core.
Mar 10 01:37:39.950687 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 10 01:37:40.149576 sshd[4269]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:40.155948 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:43226.service: Deactivated successfully.
Mar 10 01:37:40.159300 systemd[1]: session-13.scope: Deactivated successfully.
Mar 10 01:37:40.163952 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit.
Mar 10 01:37:40.165894 systemd-logind[1446]: Removed session 13.
Mar 10 01:37:45.185691 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:44512.service - OpenSSH per-connection server daemon (10.0.0.1:44512).
Mar 10 01:37:45.237968 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 44512 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:45.241254 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:45.262307 systemd-logind[1446]: New session 14 of user core.
Mar 10 01:37:45.274120 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 10 01:37:45.495094 kubelet[2612]: E0310 01:37:45.494363 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:45.502658 sshd[4286]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:45.521885 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:44512.service: Deactivated successfully.
Mar 10 01:37:45.527671 systemd[1]: session-14.scope: Deactivated successfully.
Mar 10 01:37:45.532764 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
Mar 10 01:37:45.549453 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:44524.service - OpenSSH per-connection server daemon (10.0.0.1:44524).
Mar 10 01:37:45.553911 systemd-logind[1446]: Removed session 14.
Mar 10 01:37:45.634308 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 44524 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:45.638174 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:45.654734 systemd-logind[1446]: New session 15 of user core.
Mar 10 01:37:45.670946 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 10 01:37:46.013330 sshd[4301]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:46.031724 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:44524.service: Deactivated successfully.
Mar 10 01:37:46.036291 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 01:37:46.040437 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Mar 10 01:37:46.062922 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534).
Mar 10 01:37:46.068339 systemd-logind[1446]: Removed session 15.
Mar 10 01:37:46.170902 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:46.178050 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:46.201169 systemd-logind[1446]: New session 16 of user core.
Mar 10 01:37:46.220104 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 01:37:46.500185 sshd[4314]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:46.509298 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:44534.service: Deactivated successfully.
Mar 10 01:37:46.514930 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 01:37:46.520093 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Mar 10 01:37:46.524778 systemd-logind[1446]: Removed session 16.
Mar 10 01:37:51.593158 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:44540.service - OpenSSH per-connection server daemon (10.0.0.1:44540).
Mar 10 01:37:51.751774 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 44540 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:51.784752 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:51.804263 systemd-logind[1446]: New session 17 of user core.
Mar 10 01:37:51.830407 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 01:37:52.742666 sshd[4329]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:52.755484 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:44540.service: Deactivated successfully.
Mar 10 01:37:52.787276 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 01:37:52.789167 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Mar 10 01:37:52.801883 systemd-logind[1446]: Removed session 17.
Mar 10 01:37:57.790130 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:55738.service - OpenSSH per-connection server daemon (10.0.0.1:55738).
Mar 10 01:37:58.204354 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 55738 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:58.233121 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:58.294443 systemd-logind[1446]: New session 18 of user core.
Mar 10 01:37:58.368162 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 01:38:00.583712 kubelet[2612]: E0310 01:38:00.583184 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:38:00.590436 sshd[4343]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:00.689019 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:55738.service: Deactivated successfully.
Mar 10 01:38:00.735336 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 01:38:00.738069 systemd[1]: session-18.scope: Consumed 1.331s CPU time.
Mar 10 01:38:00.739707 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Mar 10 01:38:00.747950 systemd-logind[1446]: Removed session 18.
Mar 10 01:38:05.810720 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:39486.service - OpenSSH per-connection server daemon (10.0.0.1:39486).
Mar 10 01:38:06.391195 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 39486 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:06.411880 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:06.429020 systemd-logind[1446]: New session 19 of user core.
Mar 10 01:38:06.436834 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 01:38:06.642675 sshd[4361]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:06.652664 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:39486.service: Deactivated successfully.
Mar 10 01:38:06.677146 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 01:38:06.681610 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Mar 10 01:38:06.685103 systemd-logind[1446]: Removed session 19.
Mar 10 01:38:11.568766 kubelet[2612]: E0310 01:38:11.568478 2612 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.939s"
Mar 10 01:38:11.721591 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:39500.service - OpenSSH per-connection server daemon (10.0.0.1:39500).
Mar 10 01:38:11.828681 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 39500 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:11.834701 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:11.865010 systemd-logind[1446]: New session 20 of user core.
Mar 10 01:38:11.884011 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 01:38:13.204463 sshd[4376]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:13.217761 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:39500.service: Deactivated successfully.
Mar 10 01:38:13.231399 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 01:38:13.237052 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Mar 10 01:38:13.250890 systemd-logind[1446]: Removed session 20.
Mar 10 01:38:14.607062 kubelet[2612]: E0310 01:38:14.606672 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:38:18.231191 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266).
Mar 10 01:38:18.304606 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:18.311946 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:18.343105 systemd-logind[1446]: New session 21 of user core.
Mar 10 01:38:18.353977 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 01:38:18.666820 sshd[4396]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:18.674987 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:40266.service: Deactivated successfully.
Mar 10 01:38:18.680366 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 01:38:18.683647 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Mar 10 01:38:18.692822 systemd-logind[1446]: Removed session 21.
Mar 10 01:38:21.494106 kubelet[2612]: E0310 01:38:21.493970 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:38:23.698349 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074).
Mar 10 01:38:23.775426 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:23.787603 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:23.797405 systemd-logind[1446]: New session 22 of user core.
Mar 10 01:38:23.806979 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 01:38:24.064988 sshd[4410]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:24.081147 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:35074.service: Deactivated successfully.
Mar 10 01:38:24.085496 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 01:38:24.092394 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Mar 10 01:38:24.103758 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:35082.service - OpenSSH per-connection server daemon (10.0.0.1:35082).
Mar 10 01:38:24.106081 systemd-logind[1446]: Removed session 22.
Mar 10 01:38:24.186064 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 35082 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:24.192431 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:24.209397 systemd-logind[1446]: New session 23 of user core.
Mar 10 01:38:24.223467 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 01:38:24.874156 sshd[4424]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:24.895670 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:35082.service: Deactivated successfully.
Mar 10 01:38:24.903082 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 01:38:24.908230 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Mar 10 01:38:24.917952 systemd[1]: Started sshd@23-10.0.0.148:22-10.0.0.1:35092.service - OpenSSH per-connection server daemon (10.0.0.1:35092).
Mar 10 01:38:24.921007 systemd-logind[1446]: Removed session 23.
Mar 10 01:38:25.001384 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:25.005718 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:25.020765 systemd-logind[1446]: New session 24 of user core.
Mar 10 01:38:25.031043 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 01:38:27.793156 update_engine[1448]: I20260310 01:38:27.777361 1448 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 10 01:38:27.793156 update_engine[1448]: I20260310 01:38:27.784674 1448 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:28.732970 1448 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.143956 1448 omaha_request_params.cc:62] Current group set to lts
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.231045 1448 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.231091 1448 update_attempter.cc:643] Scheduling an action processor start.
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.231238 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.234982 1448 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.235172 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.235194 1448 omaha_request_action.cc:272] Request:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]:
Mar 10 01:38:29.245608 update_engine[1448]: I20260310 01:38:29.242194 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:38:29.247109 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 10 01:38:29.263667 update_engine[1448]: I20260310 01:38:29.260962 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:38:29.263667 update_engine[1448]: I20260310 01:38:29.262828 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:38:29.279884 update_engine[1448]: E20260310 01:38:29.279695 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:38:29.279884 update_engine[1448]: I20260310 01:38:29.279832 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 10 01:38:29.321055 kubelet[2612]: E0310 01:38:29.320859 2612 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.616s"
Mar 10 01:38:33.432960 kubelet[2612]: E0310 01:38:33.432687 2612 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.598s"
Mar 10 01:38:34.496666 kubelet[2612]: E0310 01:38:34.496191 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:38:34.889499 sshd[4437]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:34.931368 systemd[1]: sshd@23-10.0.0.148:22-10.0.0.1:35092.service: Deactivated successfully.
Mar 10 01:38:34.941005 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 01:38:34.942455 systemd[1]: session-24.scope: Consumed 3.690s CPU time.
Mar 10 01:38:34.951120 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Mar 10 01:38:34.976631 systemd[1]: Started sshd@24-10.0.0.148:22-10.0.0.1:39530.service - OpenSSH per-connection server daemon (10.0.0.1:39530).
Mar 10 01:38:34.981005 systemd-logind[1446]: Removed session 24.
Mar 10 01:38:35.078917 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 39530 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:35.080601 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:35.103821 systemd-logind[1446]: New session 25 of user core.
Mar 10 01:38:35.113352 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 10 01:38:35.895947 sshd[4459]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:35.910032 systemd[1]: sshd@24-10.0.0.148:22-10.0.0.1:39530.service: Deactivated successfully.
Mar 10 01:38:35.916104 systemd[1]: session-25.scope: Deactivated successfully.
Mar 10 01:38:35.918712 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Mar 10 01:38:35.964003 systemd[1]: Started sshd@25-10.0.0.148:22-10.0.0.1:39534.service - OpenSSH per-connection server daemon (10.0.0.1:39534).
Mar 10 01:38:35.986116 systemd-logind[1446]: Removed session 25.
Mar 10 01:38:36.052991 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 39534 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:36.063634 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:36.078231 systemd-logind[1446]: New session 26 of user core.
Mar 10 01:38:36.088407 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 10 01:38:36.345097 sshd[4473]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:36.365055 systemd[1]: sshd@25-10.0.0.148:22-10.0.0.1:39534.service: Deactivated successfully.
Mar 10 01:38:36.370191 systemd[1]: session-26.scope: Deactivated successfully.
Mar 10 01:38:36.373765 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Mar 10 01:38:36.380078 systemd-logind[1446]: Removed session 26.
Mar 10 01:38:39.650761 update_engine[1448]: I20260310 01:38:39.649103 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:38:39.650761 update_engine[1448]: I20260310 01:38:39.649873 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:38:39.650761 update_engine[1448]: I20260310 01:38:39.650585 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:38:39.678762 update_engine[1448]: E20260310 01:38:39.678634 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:38:39.679021 update_engine[1448]: I20260310 01:38:39.678988 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 10 01:38:41.389451 systemd[1]: Started sshd@26-10.0.0.148:22-10.0.0.1:39546.service - OpenSSH per-connection server daemon (10.0.0.1:39546).
Mar 10 01:38:41.528588 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 39546 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:41.536147 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:41.571870 systemd-logind[1446]: New session 27 of user core.
Mar 10 01:38:41.583165 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 10 01:38:41.863418 sshd[4487]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:41.877675 systemd[1]: sshd@26-10.0.0.148:22-10.0.0.1:39546.service: Deactivated successfully.
Mar 10 01:38:41.883044 systemd[1]: session-27.scope: Deactivated successfully.
Mar 10 01:38:41.887647 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Mar 10 01:38:41.893008 systemd-logind[1446]: Removed session 27.
Mar 10 01:38:42.494940 kubelet[2612]: E0310 01:38:42.494692 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:38:46.905832 systemd[1]: Started sshd@27-10.0.0.148:22-10.0.0.1:56898.service - OpenSSH per-connection server daemon (10.0.0.1:56898).
Mar 10 01:38:46.993753 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 56898 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:46.998638 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:47.016225 systemd-logind[1446]: New session 28 of user core.
Mar 10 01:38:47.027632 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 10 01:38:47.355012 sshd[4504]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:47.366254 systemd[1]: sshd@27-10.0.0.148:22-10.0.0.1:56898.service: Deactivated successfully.
Mar 10 01:38:47.377260 systemd[1]: session-28.scope: Deactivated successfully.
Mar 10 01:38:47.384651 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit.
Mar 10 01:38:47.388865 systemd-logind[1446]: Removed session 28.
Mar 10 01:38:47.493946 kubelet[2612]: E0310 01:38:47.493747 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:38:49.645750 update_engine[1448]: I20260310 01:38:49.645606 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:38:49.646496 update_engine[1448]: I20260310 01:38:49.646087 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:38:49.647240 update_engine[1448]: I20260310 01:38:49.647133 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:38:49.673415 update_engine[1448]: E20260310 01:38:49.671741 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:38:49.673415 update_engine[1448]: I20260310 01:38:49.671885 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 10 01:38:52.397095 systemd[1]: Started sshd@28-10.0.0.148:22-10.0.0.1:37556.service - OpenSSH per-connection server daemon (10.0.0.1:37556).
Mar 10 01:38:52.514421 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 37556 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:52.522191 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:52.535123 systemd-logind[1446]: New session 29 of user core.
Mar 10 01:38:52.544699 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 10 01:38:52.842022 sshd[4519]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:52.850194 systemd[1]: sshd@28-10.0.0.148:22-10.0.0.1:37556.service: Deactivated successfully.
Mar 10 01:38:52.854857 systemd[1]: session-29.scope: Deactivated successfully.
Mar 10 01:38:52.859004 systemd-logind[1446]: Session 29 logged out. Waiting for processes to exit.
Mar 10 01:38:52.862275 systemd-logind[1446]: Removed session 29.
Mar 10 01:38:57.878212 systemd[1]: Started sshd@29-10.0.0.148:22-10.0.0.1:37568.service - OpenSSH per-connection server daemon (10.0.0.1:37568).
Mar 10 01:38:57.937906 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 37568 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:38:57.941248 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:38:57.987084 systemd-logind[1446]: New session 30 of user core.
Mar 10 01:38:57.995073 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 10 01:38:58.324921 sshd[4534]: pam_unix(sshd:session): session closed for user core
Mar 10 01:38:58.338679 systemd[1]: sshd@29-10.0.0.148:22-10.0.0.1:37568.service: Deactivated successfully.
Mar 10 01:38:58.347007 systemd[1]: session-30.scope: Deactivated successfully.
Mar 10 01:38:58.351832 systemd-logind[1446]: Session 30 logged out. Waiting for processes to exit.
Mar 10 01:38:58.382430 systemd-logind[1446]: Removed session 30.
Mar 10 01:38:59.646096 update_engine[1448]: I20260310 01:38:59.645103 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:38:59.646096 update_engine[1448]: I20260310 01:38:59.645748 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:38:59.647716 update_engine[1448]: I20260310 01:38:59.647326 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:38:59.679709 update_engine[1448]: E20260310 01:38:59.679641 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:38:59.680119 update_engine[1448]: I20260310 01:38:59.680084 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 10 01:38:59.680223 update_engine[1448]: I20260310 01:38:59.680197 1448 omaha_request_action.cc:617] Omaha request response:
Mar 10 01:38:59.680922 update_engine[1448]: E20260310 01:38:59.680649 1448 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 10 01:38:59.680922 update_engine[1448]: I20260310 01:38:59.680847 1448 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 10 01:38:59.680922 update_engine[1448]: I20260310 01:38:59.680863 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:38:59.680922 update_engine[1448]: I20260310 01:38:59.680873 1448 update_attempter.cc:306] Processing Done.
Mar 10 01:38:59.680922 update_engine[1448]: E20260310 01:38:59.680893 1448 update_attempter.cc:619] Update failed.
Mar 10 01:38:59.680922 update_engine[1448]: I20260310 01:38:59.680907 1448 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 10 01:38:59.680922 update_engine[1448]: I20260310 01:38:59.680922 1448 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 10 01:38:59.680922 update_engine[1448]: I20260310 01:38:59.680931 1448 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 10 01:38:59.683139 update_engine[1448]: I20260310 01:38:59.681228 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 10 01:38:59.683139 update_engine[1448]: I20260310 01:38:59.681272 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 10 01:38:59.683139 update_engine[1448]: I20260310 01:38:59.681285 1448 omaha_request_action.cc:272] Request:
Mar 10 01:38:59.683139 update_engine[1448]:
Mar 10 01:38:59.683139 update_engine[1448]:
Mar 10 01:38:59.683139 update_engine[1448]:
Mar 10 01:38:59.683139 update_engine[1448]:
Mar 10 01:38:59.683139 update_engine[1448]:
Mar 10 01:38:59.683139 update_engine[1448]:
Mar 10 01:38:59.683139 update_engine[1448]: I20260310 01:38:59.681298 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:38:59.683139 update_engine[1448]: I20260310 01:38:59.681762 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:38:59.683139 update_engine[1448]: I20260310 01:38:59.682154 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:38:59.683730 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 10 01:38:59.700119 update_engine[1448]: E20260310 01:38:59.699913 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:38:59.700119 update_engine[1448]: I20260310 01:38:59.700075 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 10 01:38:59.700119 update_engine[1448]: I20260310 01:38:59.700091 1448 omaha_request_action.cc:617] Omaha request response:
Mar 10 01:38:59.700119 update_engine[1448]: I20260310 01:38:59.700103 1448 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:38:59.700119 update_engine[1448]: I20260310 01:38:59.700111 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:38:59.700801 update_engine[1448]: I20260310 01:38:59.700119 1448 update_attempter.cc:306] Processing Done.
Mar 10 01:38:59.700801 update_engine[1448]: I20260310 01:38:59.700167 1448 update_attempter.cc:310] Error event sent.
Mar 10 01:38:59.700801 update_engine[1448]: I20260310 01:38:59.700218 1448 update_check_scheduler.cc:74] Next update check in 44m53s
Mar 10 01:38:59.701499 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 10 01:39:03.353039 systemd[1]: Started sshd@30-10.0.0.148:22-10.0.0.1:56722.service - OpenSSH per-connection server daemon (10.0.0.1:56722).
Mar 10 01:39:03.415954 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 56722 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:39:03.418477 sshd[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:39:03.435637 systemd-logind[1446]: New session 31 of user core.
Mar 10 01:39:03.447864 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 10 01:39:03.641272 sshd[4554]: pam_unix(sshd:session): session closed for user core
Mar 10 01:39:03.649199 systemd[1]: sshd@30-10.0.0.148:22-10.0.0.1:56722.service: Deactivated successfully.
Mar 10 01:39:03.653907 systemd[1]: session-31.scope: Deactivated successfully.
Mar 10 01:39:03.658010 systemd-logind[1446]: Session 31 logged out. Waiting for processes to exit.
Mar 10 01:39:03.660873 systemd-logind[1446]: Removed session 31.
Mar 10 01:39:04.495647 kubelet[2612]: E0310 01:39:04.495179 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:39:08.665114 kubelet[2612]: E0310 01:39:08.638811 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:39:09.193114 systemd[1]: Started sshd@31-10.0.0.148:22-10.0.0.1:56724.service - OpenSSH per-connection server daemon (10.0.0.1:56724).
Mar 10 01:39:09.706469 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 56724 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:39:09.725357 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:39:09.804784 systemd-logind[1446]: New session 32 of user core.
Mar 10 01:39:09.843865 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 10 01:39:10.881172 sshd[4569]: pam_unix(sshd:session): session closed for user core
Mar 10 01:39:10.892027 systemd[1]: sshd@31-10.0.0.148:22-10.0.0.1:56724.service: Deactivated successfully.
Mar 10 01:39:10.929835 systemd[1]: session-32.scope: Deactivated successfully.
Mar 10 01:39:10.939765 systemd-logind[1446]: Session 32 logged out. Waiting for processes to exit.
Mar 10 01:39:10.943459 systemd-logind[1446]: Removed session 32.
Mar 10 01:39:15.925719 systemd[1]: Started sshd@32-10.0.0.148:22-10.0.0.1:47658.service - OpenSSH per-connection server daemon (10.0.0.1:47658).
Mar 10 01:39:15.998625 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 47658 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:39:16.003806 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:39:16.038719 systemd-logind[1446]: New session 33 of user core.
Mar 10 01:39:16.052737 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 10 01:39:16.393307 sshd[4585]: pam_unix(sshd:session): session closed for user core
Mar 10 01:39:16.415342 systemd[1]: sshd@32-10.0.0.148:22-10.0.0.1:47658.service: Deactivated successfully.
Mar 10 01:39:16.437451 systemd[1]: session-33.scope: Deactivated successfully.
Mar 10 01:39:16.453161 systemd-logind[1446]: Session 33 logged out. Waiting for processes to exit.
Mar 10 01:39:16.498730 systemd[1]: Started sshd@33-10.0.0.148:22-10.0.0.1:47662.service - OpenSSH per-connection server daemon (10.0.0.1:47662).
Mar 10 01:39:16.504086 systemd-logind[1446]: Removed session 33.
Mar 10 01:39:17.023695 sshd[4600]: Accepted publickey for core from 10.0.0.1 port 47662 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:39:17.033162 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:39:17.119633 systemd-logind[1446]: New session 34 of user core.
Mar 10 01:39:17.134016 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 10 01:39:20.015308 containerd[1456]: time="2026-03-10T01:39:20.011843853Z" level=info msg="StopContainer for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" with timeout 30 (s)"
Mar 10 01:39:20.017582 containerd[1456]: time="2026-03-10T01:39:20.016722255Z" level=info msg="Stop container \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" with signal terminated"
Mar 10 01:39:20.046497 systemd[1]: run-containerd-runc-k8s.io-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40-runc.vwHwjT.mount: Deactivated successfully.
Mar 10 01:39:20.111996 systemd[1]: cri-containerd-f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99.scope: Deactivated successfully.
Mar 10 01:39:20.114675 systemd[1]: cri-containerd-f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99.scope: Consumed 3.500s CPU time.
Mar 10 01:39:20.194238 containerd[1456]: time="2026-03-10T01:39:20.193778127Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:39:20.213804 containerd[1456]: time="2026-03-10T01:39:20.213752041Z" level=info msg="StopContainer for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" with timeout 2 (s)"
Mar 10 01:39:20.214792 containerd[1456]: time="2026-03-10T01:39:20.214622848Z" level=info msg="Stop container \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" with signal terminated"
Mar 10 01:39:20.250612 systemd-networkd[1380]: lxc_health: Link DOWN
Mar 10 01:39:20.250626 systemd-networkd[1380]: lxc_health: Lost carrier
Mar 10 01:39:20.282312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99-rootfs.mount: Deactivated successfully.
Mar 10 01:39:20.321619 containerd[1456]: time="2026-03-10T01:39:20.320165257Z" level=info msg="shim disconnected" id=f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99 namespace=k8s.io
Mar 10 01:39:20.321619 containerd[1456]: time="2026-03-10T01:39:20.320227032Z" level=warning msg="cleaning up after shim disconnected" id=f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99 namespace=k8s.io
Mar 10 01:39:20.321619 containerd[1456]: time="2026-03-10T01:39:20.320244445Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:39:20.341668 systemd[1]: cri-containerd-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40.scope: Deactivated successfully.
Mar 10 01:39:20.344657 systemd[1]: cri-containerd-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40.scope: Consumed 25.118s CPU time.
Mar 10 01:39:20.420803 containerd[1456]: time="2026-03-10T01:39:20.420736090Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:39:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 10 01:39:20.435282 containerd[1456]: time="2026-03-10T01:39:20.434271238Z" level=info msg="StopContainer for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" returns successfully"
Mar 10 01:39:20.436654 containerd[1456]: time="2026-03-10T01:39:20.435856634Z" level=info msg="StopPodSandbox for \"30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18\""
Mar 10 01:39:20.436654 containerd[1456]: time="2026-03-10T01:39:20.435891529Z" level=info msg="Container to stop \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:39:20.440227 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18-shm.mount: Deactivated successfully.
Mar 10 01:39:20.469291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40-rootfs.mount: Deactivated successfully.
Mar 10 01:39:20.491062 systemd[1]: cri-containerd-30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18.scope: Deactivated successfully.
Mar 10 01:39:20.513315 containerd[1456]: time="2026-03-10T01:39:20.512906926Z" level=info msg="shim disconnected" id=79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40 namespace=k8s.io
Mar 10 01:39:20.513315 containerd[1456]: time="2026-03-10T01:39:20.512981155Z" level=warning msg="cleaning up after shim disconnected" id=79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40 namespace=k8s.io
Mar 10 01:39:20.513315 containerd[1456]: time="2026-03-10T01:39:20.512995571Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:39:20.592651 containerd[1456]: time="2026-03-10T01:39:20.592258010Z" level=info msg="shim disconnected" id=30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18 namespace=k8s.io
Mar 10 01:39:20.592651 containerd[1456]: time="2026-03-10T01:39:20.592368486Z" level=warning msg="cleaning up after shim disconnected" id=30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18 namespace=k8s.io
Mar 10 01:39:20.592651 containerd[1456]: time="2026-03-10T01:39:20.592390618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:39:20.597287 containerd[1456]: time="2026-03-10T01:39:20.596891640Z" level=info msg="StopContainer for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" returns successfully"
Mar 10 01:39:20.600588 containerd[1456]: time="2026-03-10T01:39:20.599385742Z" level=info msg="StopPodSandbox for \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\""
Mar 10 01:39:20.600588 containerd[1456]: time="2026-03-10T01:39:20.599480398Z" level=info msg="Container to stop \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:39:20.600588 containerd[1456]: time="2026-03-10T01:39:20.599597517Z" level=info msg="Container to stop \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:39:20.600588 containerd[1456]: time="2026-03-10T01:39:20.599857013Z" level=info msg="Container to stop \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:39:20.600588 containerd[1456]: time="2026-03-10T01:39:20.599876469Z" level=info msg="Container to stop \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:39:20.600588 containerd[1456]: time="2026-03-10T01:39:20.599896536Z" level=info msg="Container to stop \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:39:20.624763 systemd[1]: cri-containerd-1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011.scope: Deactivated successfully.
Mar 10 01:39:20.647328 containerd[1456]: time="2026-03-10T01:39:20.647150493Z" level=info msg="TearDown network for sandbox \"30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18\" successfully"
Mar 10 01:39:20.647328 containerd[1456]: time="2026-03-10T01:39:20.647200747Z" level=info msg="StopPodSandbox for \"30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18\" returns successfully"
Mar 10 01:39:20.737170 containerd[1456]: time="2026-03-10T01:39:20.736981423Z" level=info msg="shim disconnected" id=1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011 namespace=k8s.io
Mar 10 01:39:20.737170 containerd[1456]: time="2026-03-10T01:39:20.737070539Z" level=warning msg="cleaning up after shim disconnected" id=1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011 namespace=k8s.io
Mar 10 01:39:20.737170 containerd[1456]: time="2026-03-10T01:39:20.737083433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:39:20.780591 containerd[1456]: time="2026-03-10T01:39:20.780397203Z" level=info msg="TearDown network for sandbox \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" successfully"
Mar 10 01:39:20.780591 containerd[1456]: time="2026-03-10T01:39:20.780501027Z" level=info msg="StopPodSandbox for \"1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011\" returns successfully"
Mar 10 01:39:20.850917 kubelet[2612]: I0310 01:39:20.849374    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-cilium-config-path\") pod \"a3f880b4-bcb2-44b1-b2fb-16ea234180ea\" (UID: \"a3f880b4-bcb2-44b1-b2fb-16ea234180ea\") "
Mar 10 01:39:20.850917 kubelet[2612]: I0310 01:39:20.849616    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8g78\" (UniqueName: \"kubernetes.io/projected/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-kube-api-access-c8g78\") pod \"a3f880b4-bcb2-44b1-b2fb-16ea234180ea\" (UID: \"a3f880b4-bcb2-44b1-b2fb-16ea234180ea\") "
Mar 10 01:39:20.859395 kubelet[2612]: I0310 01:39:20.858951    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3f880b4-bcb2-44b1-b2fb-16ea234180ea" (UID: "a3f880b4-bcb2-44b1-b2fb-16ea234180ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 01:39:20.867060 kubelet[2612]: I0310 01:39:20.866477    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-kube-api-access-c8g78" (OuterVolumeSpecName: "kube-api-access-c8g78") pod "a3f880b4-bcb2-44b1-b2fb-16ea234180ea" (UID: "a3f880b4-bcb2-44b1-b2fb-16ea234180ea"). InnerVolumeSpecName "kube-api-access-c8g78". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 01:39:20.963342 kubelet[2612]: I0310 01:39:20.950475    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-config-path\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963342 kubelet[2612]: I0310 01:39:20.950676    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-etc-cni-netd\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963342 kubelet[2612]: I0310 01:39:20.950708    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cni-path\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963342 kubelet[2612]: I0310 01:39:20.950737    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hubble-tls\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963342 kubelet[2612]: I0310 01:39:20.950807    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hostproc\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963342 kubelet[2612]: I0310 01:39:20.950833    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-run\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963928 kubelet[2612]: I0310 01:39:20.950852    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-cgroup\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963928 kubelet[2612]: I0310 01:39:20.950870    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-xtables-lock\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963928 kubelet[2612]: I0310 01:39:20.950891    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-kernel\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963928 kubelet[2612]: I0310 01:39:20.950916    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01ef80d5-fbce-4009-bc6a-86a4ac82a706-clustermesh-secrets\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963928 kubelet[2612]: I0310 01:39:20.950938    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-net\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.963928 kubelet[2612]: I0310 01:39:20.950963    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7vmx\" (UniqueName: \"kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-kube-api-access-j7vmx\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.964169 kubelet[2612]: I0310 01:39:20.950983    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-bpf-maps\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.964169 kubelet[2612]: I0310 01:39:20.951001    2612 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-lib-modules\") pod \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\" (UID: \"01ef80d5-fbce-4009-bc6a-86a4ac82a706\") "
Mar 10 01:39:20.964169 kubelet[2612]: I0310 01:39:20.951045    2612 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:20.964169 kubelet[2612]: I0310 01:39:20.951059    2612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8g78\" (UniqueName: \"kubernetes.io/projected/a3f880b4-bcb2-44b1-b2fb-16ea234180ea-kube-api-access-c8g78\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:20.964169 kubelet[2612]: I0310 01:39:20.951246    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.964169 kubelet[2612]: I0310 01:39:20.952945    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.964669 kubelet[2612]: I0310 01:39:20.953032    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.964669 kubelet[2612]: I0310 01:39:20.953066    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cni-path" (OuterVolumeSpecName: "cni-path") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.966641 kubelet[2612]: I0310 01:39:20.966277    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 01:39:20.967316 kubelet[2612]: I0310 01:39:20.967110    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.968807 kubelet[2612]: I0310 01:39:20.968687    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.968807 kubelet[2612]: I0310 01:39:20.968787    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.969312 kubelet[2612]: I0310 01:39:20.968825    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.969312 kubelet[2612]: I0310 01:39:20.968698    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.969770 kubelet[2612]: I0310 01:39:20.969657    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hostproc" (OuterVolumeSpecName: "hostproc") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:39:20.973177 kubelet[2612]: I0310 01:39:20.973050    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef80d5-fbce-4009-bc6a-86a4ac82a706-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 10 01:39:20.975040 kubelet[2612]: I0310 01:39:20.974985    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 01:39:20.979357 kubelet[2612]: I0310 01:39:20.979276    2612 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-kube-api-access-j7vmx" (OuterVolumeSpecName: "kube-api-access-j7vmx") pod "01ef80d5-fbce-4009-bc6a-86a4ac82a706" (UID: "01ef80d5-fbce-4009-bc6a-86a4ac82a706"). InnerVolumeSpecName "kube-api-access-j7vmx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 01:39:21.034495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011-rootfs.mount: Deactivated successfully.
Mar 10 01:39:21.034791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30db175c25132a7cf879648319337973aff3abb1f38166715019f900eb8c0b18-rootfs.mount: Deactivated successfully.
Mar 10 01:39:21.034928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d9c9723d7161f08ffa34eba62f9c18aa501c1e84b6a6ebe3b8255ab7fab9011-shm.mount: Deactivated successfully.
Mar 10 01:39:21.035048 systemd[1]: var-lib-kubelet-pods-a3f880b4\x2dbcb2\x2d44b1\x2db2fb\x2d16ea234180ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc8g78.mount: Deactivated successfully.
Mar 10 01:39:21.035173 systemd[1]: var-lib-kubelet-pods-01ef80d5\x2dfbce\x2d4009\x2dbc6a\x2d86a4ac82a706-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj7vmx.mount: Deactivated successfully.
Mar 10 01:39:21.035347 systemd[1]: var-lib-kubelet-pods-01ef80d5\x2dfbce\x2d4009\x2dbc6a\x2d86a4ac82a706-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 10 01:39:21.035602 systemd[1]: var-lib-kubelet-pods-01ef80d5\x2dfbce\x2d4009\x2dbc6a\x2d86a4ac82a706-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 10 01:39:21.051958 kubelet[2612]: I0310 01:39:21.051820    2612 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.051958 kubelet[2612]: I0310 01:39:21.051915    2612 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.051958 kubelet[2612]: I0310 01:39:21.051935    2612 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.051958 kubelet[2612]: I0310 01:39:21.051951    2612 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01ef80d5-fbce-4009-bc6a-86a4ac82a706-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.051958 kubelet[2612]: I0310 01:39:21.051967    2612 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.051958 kubelet[2612]: I0310 01:39:21.051980    2612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j7vmx\" (UniqueName: \"kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-kube-api-access-j7vmx\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.051996    2612 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052009    2612 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052022    2612 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052036    2612 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052048    2612 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052060    2612 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052074    2612 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.052467 kubelet[2612]: I0310 01:39:21.052086    2612 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01ef80d5-fbce-4009-bc6a-86a4ac82a706-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 10 01:39:21.308587 kubelet[2612]: I0310 01:39:21.307681    2612 scope.go:117] "RemoveContainer" containerID="f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99"
Mar 10 01:39:21.313541 containerd[1456]: time="2026-03-10T01:39:21.312326405Z" level=info msg="RemoveContainer for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\""
Mar 10 01:39:21.325484 systemd[1]: Removed slice kubepods-besteffort-poda3f880b4_bcb2_44b1_b2fb_16ea234180ea.slice - libcontainer container kubepods-besteffort-poda3f880b4_bcb2_44b1_b2fb_16ea234180ea.slice.
Mar 10 01:39:21.326001 systemd[1]: kubepods-besteffort-poda3f880b4_bcb2_44b1_b2fb_16ea234180ea.slice: Consumed 3.629s CPU time.
Mar 10 01:39:21.331781 systemd[1]: Removed slice kubepods-burstable-pod01ef80d5_fbce_4009_bc6a_86a4ac82a706.slice - libcontainer container kubepods-burstable-pod01ef80d5_fbce_4009_bc6a_86a4ac82a706.slice.
Mar 10 01:39:21.331924 systemd[1]: kubepods-burstable-pod01ef80d5_fbce_4009_bc6a_86a4ac82a706.slice: Consumed 25.384s CPU time.
Mar 10 01:39:21.336616 containerd[1456]: time="2026-03-10T01:39:21.336574375Z" level=info msg="RemoveContainer for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" returns successfully"
Mar 10 01:39:21.337089 kubelet[2612]: I0310 01:39:21.337059    2612 scope.go:117] "RemoveContainer" containerID="f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99"
Mar 10 01:39:21.337664 containerd[1456]: time="2026-03-10T01:39:21.337615250Z" level=error msg="ContainerStatus for \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\": not found"
Mar 10 01:39:21.338135 kubelet[2612]: E0310 01:39:21.338052    2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\": not found" containerID="f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99"
Mar 10 01:39:21.338209 kubelet[2612]: I0310 01:39:21.338128    2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99"} err="failed to get container status \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\": rpc error: code = NotFound desc = an error occurred when try to find container \"f94e3234f998aa9fc14c4f5814c1fc367b6b61e077aadf239fb06515a0e11a99\": not found"
Mar 10 01:39:21.338209 kubelet[2612]: I0310 01:39:21.338173    2612 scope.go:117] "RemoveContainer" containerID="79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40"
Mar 10 01:39:21.342371 containerd[1456]: time="2026-03-10T01:39:21.342265648Z" level=info msg="RemoveContainer for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\""
Mar 10 01:39:21.377736 containerd[1456]: time="2026-03-10T01:39:21.377627236Z" level=info msg="RemoveContainer for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" returns successfully"
Mar 10 01:39:21.377976 kubelet[2612]: I0310 01:39:21.377936    2612 scope.go:117] "RemoveContainer" containerID="981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3"
Mar 10 01:39:21.380381 containerd[1456]: time="2026-03-10T01:39:21.380023264Z" level=info msg="RemoveContainer for \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\""
Mar 10 01:39:21.387184 containerd[1456]: time="2026-03-10T01:39:21.386688211Z" level=info msg="RemoveContainer for \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\" returns successfully"
Mar 10 01:39:21.387262 kubelet[2612]: I0310 01:39:21.386983    2612 scope.go:117] "RemoveContainer" containerID="a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710"
Mar 10 01:39:21.392364 containerd[1456]: time="2026-03-10T01:39:21.392033323Z" level=info msg="RemoveContainer for \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\""
Mar 10 01:39:21.402976 containerd[1456]: time="2026-03-10T01:39:21.402815358Z" level=info msg="RemoveContainer for \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\" returns successfully"
Mar 10 01:39:21.403182 kubelet[2612]: I0310 01:39:21.403152    2612 scope.go:117] "RemoveContainer" containerID="603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526"
Mar 10 01:39:21.406330 containerd[1456]: time="2026-03-10T01:39:21.406297076Z" level=info msg="RemoveContainer for \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\""
Mar 10 01:39:21.415266 containerd[1456]: time="2026-03-10T01:39:21.415056004Z" level=info msg="RemoveContainer for \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\" returns successfully"
Mar 10 01:39:21.415627 kubelet[2612]: I0310 01:39:21.415377    2612 scope.go:117] "RemoveContainer" containerID="afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7"
Mar 10 01:39:21.417204 containerd[1456]: time="2026-03-10T01:39:21.417113597Z" level=info msg="RemoveContainer for \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\""
Mar 10 01:39:21.429159 containerd[1456]: time="2026-03-10T01:39:21.428888788Z" level=info msg="RemoveContainer for \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\" returns successfully"
Mar 10 01:39:21.429264 kubelet[2612]: I0310 01:39:21.429213    2612 scope.go:117] "RemoveContainer" containerID="79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40"
Mar 10 01:39:21.432086 containerd[1456]: time="2026-03-10T01:39:21.431984302Z" level=error msg="ContainerStatus for \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\": not found"
Mar 10 01:39:21.432607 kubelet[2612]: E0310 01:39:21.432462    2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an
error occurred when try to find container \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\": not found" containerID="79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40" Mar 10 01:39:21.432976 kubelet[2612]: I0310 01:39:21.432708 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40"} err="failed to get container status \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\": rpc error: code = NotFound desc = an error occurred when try to find container \"79a72318593d8bef6f8e3027fd2c1d46abc914fe89db99c958799dccda0f9f40\": not found" Mar 10 01:39:21.432976 kubelet[2612]: I0310 01:39:21.432757 2612 scope.go:117] "RemoveContainer" containerID="981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3" Mar 10 01:39:21.433995 containerd[1456]: time="2026-03-10T01:39:21.433822187Z" level=error msg="ContainerStatus for \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\": not found" Mar 10 01:39:21.434672 kubelet[2612]: E0310 01:39:21.434263 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\": not found" containerID="981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3" Mar 10 01:39:21.434672 kubelet[2612]: I0310 01:39:21.434303 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3"} err="failed to get container status \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"981b950cd4506cfd4418553c364deb71daf4d29eeaf33d9a3f9e553c26e490a3\": not found" Mar 10 01:39:21.434672 kubelet[2612]: I0310 01:39:21.434349 2612 scope.go:117] "RemoveContainer" containerID="a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710" Mar 10 01:39:21.435116 containerd[1456]: time="2026-03-10T01:39:21.435011550Z" level=error msg="ContainerStatus for \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\": not found" Mar 10 01:39:21.435520 kubelet[2612]: E0310 01:39:21.435380 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\": not found" containerID="a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710" Mar 10 01:39:21.435520 kubelet[2612]: I0310 01:39:21.435475 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710"} err="failed to get container status \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6339a6a39bca7bab201ba6742607341b649c02f9e4fdb7a8aca89de5510d710\": not found" Mar 10 01:39:21.435633 kubelet[2612]: I0310 01:39:21.435597 2612 scope.go:117] "RemoveContainer" containerID="603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526" Mar 10 01:39:21.436496 containerd[1456]: time="2026-03-10T01:39:21.436356380Z" level=error msg="ContainerStatus for \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\": not found" Mar 10 01:39:21.437010 kubelet[2612]: E0310 01:39:21.436839 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\": not found" containerID="603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526" Mar 10 01:39:21.437010 kubelet[2612]: I0310 01:39:21.436914 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526"} err="failed to get container status \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\": rpc error: code = NotFound desc = an error occurred when try to find container \"603ed5987adef42da34b4dd91f20fe9788cf4c249ee432ac71cd238ef24d7526\": not found" Mar 10 01:39:21.437010 kubelet[2612]: I0310 01:39:21.436945 2612 scope.go:117] "RemoveContainer" containerID="afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7" Mar 10 01:39:21.437578 containerd[1456]: time="2026-03-10T01:39:21.437354518Z" level=error msg="ContainerStatus for \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\": not found" Mar 10 01:39:21.437831 kubelet[2612]: E0310 01:39:21.437771 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\": not found" containerID="afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7" Mar 10 01:39:21.438029 kubelet[2612]: I0310 01:39:21.437840 2612 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7"} err="failed to get container status \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"afa3ce8a8796a1d8052201aedd9f8b03591df19376a922b01977eb5c7a10e0d7\": not found" Mar 10 01:39:21.499344 kubelet[2612]: I0310 01:39:21.499013 2612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ef80d5-fbce-4009-bc6a-86a4ac82a706" path="/var/lib/kubelet/pods/01ef80d5-fbce-4009-bc6a-86a4ac82a706/volumes" Mar 10 01:39:21.506607 kubelet[2612]: I0310 01:39:21.505046 2612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f880b4-bcb2-44b1-b2fb-16ea234180ea" path="/var/lib/kubelet/pods/a3f880b4-bcb2-44b1-b2fb-16ea234180ea/volumes" Mar 10 01:39:21.742927 sshd[4600]: pam_unix(sshd:session): session closed for user core Mar 10 01:39:21.769819 systemd[1]: sshd@33-10.0.0.148:22-10.0.0.1:47662.service: Deactivated successfully. Mar 10 01:39:21.773309 systemd[1]: session-34.scope: Deactivated successfully. Mar 10 01:39:21.773773 systemd[1]: session-34.scope: Consumed 1.699s CPU time. Mar 10 01:39:21.775083 systemd-logind[1446]: Session 34 logged out. Waiting for processes to exit. Mar 10 01:39:21.792149 systemd[1]: Started sshd@34-10.0.0.148:22-10.0.0.1:47666.service - OpenSSH per-connection server daemon (10.0.0.1:47666). Mar 10 01:39:21.794662 systemd-logind[1446]: Removed session 34. Mar 10 01:39:21.873922 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 47666 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:39:21.882765 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:39:21.900109 systemd-logind[1446]: New session 35 of user core. Mar 10 01:39:21.906191 systemd[1]: Started session-35.scope - Session 35 of User core. 
Mar 10 01:39:22.494556 kubelet[2612]: E0310 01:39:22.494413 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:22.697965 sshd[4764]: pam_unix(sshd:session): session closed for user core Mar 10 01:39:22.705955 systemd[1]: sshd@34-10.0.0.148:22-10.0.0.1:47666.service: Deactivated successfully. Mar 10 01:39:22.708983 systemd[1]: session-35.scope: Deactivated successfully. Mar 10 01:39:22.712877 systemd-logind[1446]: Session 35 logged out. Waiting for processes to exit. Mar 10 01:39:22.722313 systemd[1]: Started sshd@35-10.0.0.148:22-10.0.0.1:56744.service - OpenSSH per-connection server daemon (10.0.0.1:56744). Mar 10 01:39:22.726081 systemd-logind[1446]: Removed session 35. Mar 10 01:39:22.802475 systemd[1]: Created slice kubepods-burstable-pod43b939ad_2931_4fcc_a8c0_e03e9cee19cc.slice - libcontainer container kubepods-burstable-pod43b939ad_2931_4fcc_a8c0_e03e9cee19cc.slice. Mar 10 01:39:22.828590 sshd[4778]: Accepted publickey for core from 10.0.0.1 port 56744 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:39:22.834001 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:39:22.845769 systemd-logind[1446]: New session 36 of user core. Mar 10 01:39:22.852088 systemd[1]: Started session-36.scope - Session 36 of User core. 
Mar 10 01:39:22.880860 kubelet[2612]: I0310 01:39:22.879991 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-cni-path\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881014 kubelet[2612]: I0310 01:39:22.880888 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-cilium-ipsec-secrets\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881014 kubelet[2612]: I0310 01:39:22.880939 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-lib-modules\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881099 kubelet[2612]: I0310 01:39:22.880973 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-cilium-config-path\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881099 kubelet[2612]: I0310 01:39:22.881048 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-host-proc-sys-net\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881099 kubelet[2612]: I0310 01:39:22.881086 2612 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-cilium-run\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881185 kubelet[2612]: I0310 01:39:22.881115 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-cilium-cgroup\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881185 kubelet[2612]: I0310 01:39:22.881138 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-xtables-lock\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881185 kubelet[2612]: I0310 01:39:22.881164 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-bpf-maps\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881355 kubelet[2612]: I0310 01:39:22.881191 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-hostproc\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881355 kubelet[2612]: I0310 01:39:22.881222 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-host-proc-sys-kernel\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881355 kubelet[2612]: I0310 01:39:22.881249 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4rl9\" (UniqueName: \"kubernetes.io/projected/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-kube-api-access-q4rl9\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881355 kubelet[2612]: I0310 01:39:22.881280 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-etc-cni-netd\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881355 kubelet[2612]: I0310 01:39:22.881303 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-clustermesh-secrets\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.881355 kubelet[2612]: I0310 01:39:22.881332 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43b939ad-2931-4fcc-a8c0-e03e9cee19cc-hubble-tls\") pod \"cilium-vw5jf\" (UID: \"43b939ad-2931-4fcc-a8c0-e03e9cee19cc\") " pod="kube-system/cilium-vw5jf" Mar 10 01:39:22.942081 sshd[4778]: pam_unix(sshd:session): session closed for user core Mar 10 01:39:22.952112 systemd[1]: sshd@35-10.0.0.148:22-10.0.0.1:56744.service: Deactivated successfully. Mar 10 01:39:22.958082 systemd[1]: session-36.scope: Deactivated successfully. 
Mar 10 01:39:22.976131 systemd-logind[1446]: Session 36 logged out. Waiting for processes to exit. Mar 10 01:39:22.985852 systemd[1]: Started sshd@36-10.0.0.148:22-10.0.0.1:56758.service - OpenSSH per-connection server daemon (10.0.0.1:56758). Mar 10 01:39:22.988739 systemd-logind[1446]: Removed session 36. Mar 10 01:39:23.047962 sshd[4786]: Accepted publickey for core from 10.0.0.1 port 56758 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:39:23.051735 sshd[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:39:23.073274 systemd-logind[1446]: New session 37 of user core. Mar 10 01:39:23.086793 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 10 01:39:23.124701 kubelet[2612]: E0310 01:39:23.124596 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:23.125467 containerd[1456]: time="2026-03-10T01:39:23.125331248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw5jf,Uid:43b939ad-2931-4fcc-a8c0-e03e9cee19cc,Namespace:kube-system,Attempt:0,}" Mar 10 01:39:23.198359 containerd[1456]: time="2026-03-10T01:39:23.197721747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:39:23.198359 containerd[1456]: time="2026-03-10T01:39:23.197906692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:39:23.198359 containerd[1456]: time="2026-03-10T01:39:23.197946486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:39:23.198359 containerd[1456]: time="2026-03-10T01:39:23.198117556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:39:23.236024 systemd[1]: Started cri-containerd-da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d.scope - libcontainer container da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d. Mar 10 01:39:23.316774 containerd[1456]: time="2026-03-10T01:39:23.316666267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw5jf,Uid:43b939ad-2931-4fcc-a8c0-e03e9cee19cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\"" Mar 10 01:39:23.319631 kubelet[2612]: E0310 01:39:23.319577 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:23.335147 containerd[1456]: time="2026-03-10T01:39:23.333725115Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:39:23.397996 containerd[1456]: time="2026-03-10T01:39:23.397844366Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e\"" Mar 10 01:39:23.399219 containerd[1456]: time="2026-03-10T01:39:23.399140599Z" level=info msg="StartContainer for \"b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e\"" Mar 10 01:39:23.466951 systemd[1]: Started cri-containerd-b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e.scope - libcontainer container b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e. 
Mar 10 01:39:23.531557 containerd[1456]: time="2026-03-10T01:39:23.531212883Z" level=info msg="StartContainer for \"b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e\" returns successfully" Mar 10 01:39:23.563109 systemd[1]: cri-containerd-b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e.scope: Deactivated successfully. Mar 10 01:39:23.624342 containerd[1456]: time="2026-03-10T01:39:23.623676307Z" level=info msg="shim disconnected" id=b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e namespace=k8s.io Mar 10 01:39:23.624342 containerd[1456]: time="2026-03-10T01:39:23.623794249Z" level=warning msg="cleaning up after shim disconnected" id=b0441fec9528500a33751189e0a92b50a96510d489775bf2d8f016a19c66171e namespace=k8s.io Mar 10 01:39:23.624342 containerd[1456]: time="2026-03-10T01:39:23.623803766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:39:24.356271 kubelet[2612]: E0310 01:39:24.355382 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:24.394591 containerd[1456]: time="2026-03-10T01:39:24.394217191Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:39:24.424590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700033849.mount: Deactivated successfully. 
Mar 10 01:39:24.430910 containerd[1456]: time="2026-03-10T01:39:24.430613794Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484\"" Mar 10 01:39:24.432340 containerd[1456]: time="2026-03-10T01:39:24.432292632Z" level=info msg="StartContainer for \"993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484\"" Mar 10 01:39:24.495401 kubelet[2612]: E0310 01:39:24.495333 2612 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:39:24.514032 systemd[1]: Started cri-containerd-993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484.scope - libcontainer container 993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484. Mar 10 01:39:24.610097 containerd[1456]: time="2026-03-10T01:39:24.609898739Z" level=info msg="StartContainer for \"993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484\" returns successfully" Mar 10 01:39:24.630733 systemd[1]: cri-containerd-993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484.scope: Deactivated successfully. 
Mar 10 01:39:24.712387 containerd[1456]: time="2026-03-10T01:39:24.711797936Z" level=info msg="shim disconnected" id=993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484 namespace=k8s.io Mar 10 01:39:24.712387 containerd[1456]: time="2026-03-10T01:39:24.711907310Z" level=warning msg="cleaning up after shim disconnected" id=993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484 namespace=k8s.io Mar 10 01:39:24.712387 containerd[1456]: time="2026-03-10T01:39:24.711923169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:39:24.999945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-993e4ae799f9a38516a8a225e84e92c94056e9e523c3512aad5fd3b47fa52484-rootfs.mount: Deactivated successfully. Mar 10 01:39:25.376750 kubelet[2612]: E0310 01:39:25.376667 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:25.392037 containerd[1456]: time="2026-03-10T01:39:25.391864882Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 10 01:39:25.432379 containerd[1456]: time="2026-03-10T01:39:25.432211784Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6\"" Mar 10 01:39:25.433619 containerd[1456]: time="2026-03-10T01:39:25.433415776Z" level=info msg="StartContainer for \"be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6\"" Mar 10 01:39:25.508277 systemd[1]: Started cri-containerd-be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6.scope - libcontainer container be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6. 
Mar 10 01:39:25.582081 containerd[1456]: time="2026-03-10T01:39:25.581964939Z" level=info msg="StartContainer for \"be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6\" returns successfully" Mar 10 01:39:25.590882 systemd[1]: cri-containerd-be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6.scope: Deactivated successfully. Mar 10 01:39:25.664701 containerd[1456]: time="2026-03-10T01:39:25.664315253Z" level=info msg="shim disconnected" id=be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6 namespace=k8s.io Mar 10 01:39:25.664701 containerd[1456]: time="2026-03-10T01:39:25.664383600Z" level=warning msg="cleaning up after shim disconnected" id=be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6 namespace=k8s.io Mar 10 01:39:25.664701 containerd[1456]: time="2026-03-10T01:39:25.664397286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:39:25.996810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be82badaf0a728f03408459e07db3882b35b0228d064d0a9507837e52b48bea6-rootfs.mount: Deactivated successfully. 
Mar 10 01:39:26.383366 kubelet[2612]: E0310 01:39:26.382907 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:26.400234 containerd[1456]: time="2026-03-10T01:39:26.399599439Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 10 01:39:26.436165 containerd[1456]: time="2026-03-10T01:39:26.436017561Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f\"" Mar 10 01:39:26.437765 containerd[1456]: time="2026-03-10T01:39:26.437681549Z" level=info msg="StartContainer for \"391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f\"" Mar 10 01:39:26.504002 systemd[1]: Started cri-containerd-391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f.scope - libcontainer container 391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f. Mar 10 01:39:26.548646 systemd[1]: cri-containerd-391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f.scope: Deactivated successfully. 
Mar 10 01:39:26.554810 containerd[1456]: time="2026-03-10T01:39:26.554697813Z" level=info msg="StartContainer for \"391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f\" returns successfully" Mar 10 01:39:26.625576 containerd[1456]: time="2026-03-10T01:39:26.625027602Z" level=info msg="shim disconnected" id=391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f namespace=k8s.io Mar 10 01:39:26.625576 containerd[1456]: time="2026-03-10T01:39:26.625095779Z" level=warning msg="cleaning up after shim disconnected" id=391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f namespace=k8s.io Mar 10 01:39:26.625576 containerd[1456]: time="2026-03-10T01:39:26.625108252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:39:26.996050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-391f625bff6ae6016d690e91dc4a7377ec7c8281c3735b74ca1b623bf8e6555f-rootfs.mount: Deactivated successfully. Mar 10 01:39:27.391592 kubelet[2612]: E0310 01:39:27.391340 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:27.402587 containerd[1456]: time="2026-03-10T01:39:27.400496121Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 10 01:39:27.438923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185491639.mount: Deactivated successfully. 
Mar 10 01:39:27.448735 containerd[1456]: time="2026-03-10T01:39:27.448625030Z" level=info msg="CreateContainer within sandbox \"da5b0d04bd76769d0b19eb9e0d11a346de9076317bc0c60e2113a2864e53227d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ca3cfb11c333ae865242b457eef3808391b677630c81ce9622e15f7c02a2fff\"" Mar 10 01:39:27.450061 containerd[1456]: time="2026-03-10T01:39:27.449975506Z" level=info msg="StartContainer for \"5ca3cfb11c333ae865242b457eef3808391b677630c81ce9622e15f7c02a2fff\"" Mar 10 01:39:27.495637 kubelet[2612]: E0310 01:39:27.495408 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:27.507163 systemd[1]: Started cri-containerd-5ca3cfb11c333ae865242b457eef3808391b677630c81ce9622e15f7c02a2fff.scope - libcontainer container 5ca3cfb11c333ae865242b457eef3808391b677630c81ce9622e15f7c02a2fff. Mar 10 01:39:27.566396 containerd[1456]: time="2026-03-10T01:39:27.566270495Z" level=info msg="StartContainer for \"5ca3cfb11c333ae865242b457eef3808391b677630c81ce9622e15f7c02a2fff\" returns successfully" Mar 10 01:39:28.382120 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 10 01:39:28.405664 kubelet[2612]: E0310 01:39:28.405596 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:28.446237 kubelet[2612]: I0310 01:39:28.446019 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vw5jf" podStartSLOduration=6.445998679 podStartE2EDuration="6.445998679s" podCreationTimestamp="2026-03-10 01:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:39:28.442067436 +0000 UTC m=+267.625347915" 
watchObservedRunningTime="2026-03-10 01:39:28.445998679 +0000 UTC m=+267.629279158" Mar 10 01:39:29.408677 kubelet[2612]: E0310 01:39:29.408639 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:30.411698 kubelet[2612]: E0310 01:39:30.411625 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:32.508641 systemd-networkd[1380]: lxc_health: Link UP Mar 10 01:39:32.524401 systemd-networkd[1380]: lxc_health: Gained carrier Mar 10 01:39:33.120391 kubelet[2612]: E0310 01:39:33.120298 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:33.420554 kubelet[2612]: E0310 01:39:33.419300 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:33.879252 systemd[1]: run-containerd-runc-k8s.io-5ca3cfb11c333ae865242b457eef3808391b677630c81ce9622e15f7c02a2fff-runc.UeR2DJ.mount: Deactivated successfully. Mar 10 01:39:33.891817 systemd-networkd[1380]: lxc_health: Gained IPv6LL Mar 10 01:39:34.422676 kubelet[2612]: E0310 01:39:34.422557 2612 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:39:38.349425 sshd[4786]: pam_unix(sshd:session): session closed for user core Mar 10 01:39:38.359940 systemd[1]: sshd@36-10.0.0.148:22-10.0.0.1:56758.service: Deactivated successfully. Mar 10 01:39:38.362401 systemd[1]: session-37.scope: Deactivated successfully. Mar 10 01:39:38.363833 systemd-logind[1446]: Session 37 logged out. 
Waiting for processes to exit. Mar 10 01:39:38.366676 systemd-logind[1446]: Removed session 37.