Mar 7 01:55:00.726862 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:55:00.728770 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:55:00.728788 kernel: BIOS-provided physical RAM map:
Mar 7 01:55:00.728797 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:55:00.728805 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:55:00.728813 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:55:00.728823 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:55:00.728832 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:55:00.728840 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:55:00.728852 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:55:00.728860 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:55:00.728868 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:55:00.728877 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:55:00.728885 kernel: NX (Execute Disable) protection: active
Mar 7 01:55:00.728895 kernel: APIC: Static calls initialized
Mar 7 01:55:00.728908 kernel: SMBIOS 2.8 present.
Mar 7 01:55:00.728917 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:55:00.728926 kernel: Hypervisor detected: KVM
Mar 7 01:55:00.728935 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:55:00.728944 kernel: kvm-clock: using sched offset of 7981806748 cycles
Mar 7 01:55:00.728954 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:55:00.728963 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:55:00.728972 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:55:00.728982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:55:00.728994 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:55:00.729004 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:55:00.729013 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:55:00.729023 kernel: Using GB pages for direct mapping
Mar 7 01:55:00.729032 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:55:00.729041 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:55:00.729050 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:55:00.729100 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:55:00.729110 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:55:00.729123 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:55:00.729133 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:55:00.729142 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:55:00.729151 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:55:00.729161 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001)
Mar 7 01:55:00.729170 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:55:00.729180 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:55:00.729194 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:55:00.729207 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:55:00.729216 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:55:00.729226 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:55:00.729236 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:55:00.729246 kernel: No NUMA configuration found
Mar 7 01:55:00.729255 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:55:00.729268 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:55:00.729278 kernel: Zone ranges:
Mar 7 01:55:00.729287 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:55:00.729297 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:55:00.729307 kernel: Normal empty
Mar 7 01:55:00.729316 kernel: Movable zone start for each node
Mar 7 01:55:00.729326 kernel: Early memory node ranges
Mar 7 01:55:00.729335 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:55:00.729345 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:55:00.729354 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:55:00.729367 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:55:00.729377 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:55:00.729386 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:55:00.729396 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:55:00.729406 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:55:00.729416 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:55:00.729425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:55:00.729435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:55:00.729445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:55:00.729458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:55:00.729467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:55:00.729477 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:55:00.729487 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:55:00.729496 kernel: TSC deadline timer available
Mar 7 01:55:00.729506 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:55:00.729516 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:55:00.729525 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:55:00.729535 kernel: kvm-guest: setup PV sched yield
Mar 7 01:55:00.729547 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:55:00.729557 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:55:00.729567 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:55:00.729577 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:55:00.729587 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:55:00.729597 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:55:00.729606 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:55:00.729616 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:55:00.729625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:55:00.729640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:55:00.729650 kernel: random: crng init done
Mar 7 01:55:00.729687 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:55:00.729697 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:55:00.729706 kernel: Fallback order for Node 0: 0
Mar 7 01:55:00.729716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:55:00.729726 kernel: Policy zone: DMA32
Mar 7 01:55:00.729736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:55:00.729750 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 7 01:55:00.729760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:55:00.729770 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:55:00.729779 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:55:00.729789 kernel: Dynamic Preempt: voluntary
Mar 7 01:55:00.729799 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:55:00.729814 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:55:00.729824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:55:00.729834 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:55:00.729847 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:55:00.729857 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:55:00.729867 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:55:00.729876 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:55:00.729886 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:55:00.729896 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:55:00.729905 kernel: Console: colour VGA+ 80x25
Mar 7 01:55:00.729915 kernel: printk: console [ttyS0] enabled
Mar 7 01:55:00.729925 kernel: ACPI: Core revision 20230628
Mar 7 01:55:00.729934 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:55:00.729947 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:55:00.729957 kernel: x2apic enabled
Mar 7 01:55:00.729966 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:55:00.729976 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:55:00.729986 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:55:00.729996 kernel: kvm-guest: setup PV IPIs
Mar 7 01:55:00.730006 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:55:00.730029 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:55:00.730039 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:55:00.730049 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:55:00.730097 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:55:00.730112 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:55:00.730122 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:55:00.730132 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:55:00.730143 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:55:00.730153 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:55:00.730167 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:55:00.730178 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:55:00.730188 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:55:00.730198 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:55:00.730209 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:55:00.730219 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:55:00.730229 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:55:00.730239 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:55:00.730253 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:55:00.730263 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:55:00.730273 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:55:00.730283 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:55:00.730293 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:55:00.730303 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:55:00.730314 kernel: landlock: Up and running.
Mar 7 01:55:00.730324 kernel: SELinux: Initializing.
Mar 7 01:55:00.730334 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:55:00.730348 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:55:00.730359 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:55:00.730369 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:55:00.730379 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:55:00.730390 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:55:00.730400 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:55:00.730410 kernel: signal: max sigframe size: 1776
Mar 7 01:55:00.730420 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:55:00.730431 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:55:00.730444 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:55:00.730454 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:55:00.730465 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:55:00.730475 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:55:00.730485 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:55:00.730495 kernel: smpboot: Max logical packages: 1
Mar 7 01:55:00.730506 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:55:00.730516 kernel: devtmpfs: initialized
Mar 7 01:55:00.730526 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:55:00.730539 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:55:00.730549 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:55:00.730560 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:55:00.730570 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:55:00.730580 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:55:00.730591 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:55:00.730601 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:55:00.730611 kernel: audit: type=2000 audit(1772848497.608:1): state=initialized audit_enabled=0 res=1
Mar 7 01:55:00.730621 kernel: cpuidle: using governor menu
Mar 7 01:55:00.730634 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:55:00.730644 kernel: dca service started, version 1.12.1
Mar 7 01:55:00.735888 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:55:00.735914 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:55:00.735926 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:55:00.735937 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:55:00.735949 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:55:00.735962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:55:00.735972 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:55:00.735989 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:55:00.736000 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:55:00.736010 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:55:00.736020 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:55:00.736030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:55:00.736041 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:55:00.736051 kernel: ACPI: Interpreter enabled
Mar 7 01:55:00.736102 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:55:00.736113 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:55:00.736128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:55:00.736139 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:55:00.736150 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:55:00.736160 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:55:00.736452 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:55:00.736691 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:55:00.739166 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:55:00.739204 kernel: PCI host bridge to bus 0000:00
Mar 7 01:55:00.740761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:55:00.740989 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:55:00.741326 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:55:00.741476 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:55:00.741617 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:55:00.741799 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:55:00.741949 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:55:00.742194 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:55:00.742370 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:55:00.742525 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:55:00.744817 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:55:00.744991 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:55:00.745226 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:55:00.745391 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 16601 usecs
Mar 7 01:55:00.745557 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:55:00.745757 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:55:00.745914 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:55:00.746117 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:55:00.746291 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:55:00.746453 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:55:00.746614 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:55:00.746815 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:55:00.746987 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:55:00.747202 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:55:00.747361 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:55:00.747514 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:55:00.750397 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:55:00.750586 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:55:00.750793 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:55:00.750961 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:55:00.751177 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:55:00.751334 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:55:00.751498 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:55:00.751696 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:55:00.751713 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:55:00.751724 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:55:00.751735 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:55:00.751745 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:55:00.751756 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:55:00.751766 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:55:00.751776 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:55:00.751786 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:55:00.751801 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:55:00.751811 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:55:00.751822 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:55:00.751832 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:55:00.751842 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:55:00.751852 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:55:00.751862 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:55:00.751873 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:55:00.751883 kernel: iommu: Default domain type: Translated
Mar 7 01:55:00.751897 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:55:00.751907 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:55:00.751917 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:55:00.751927 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:55:00.751938 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:55:00.752145 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:55:00.752304 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:55:00.752455 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:55:00.752472 kernel: vgaarb: loaded
Mar 7 01:55:00.752483 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:55:00.752494 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:55:00.752504 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:55:00.752514 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:55:00.752525 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:55:00.752535 kernel: pnp: PnP ACPI init
Mar 7 01:55:00.752746 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:55:00.752763 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:55:00.752779 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:55:00.752789 kernel: NET: Registered PF_INET protocol family
Mar 7 01:55:00.752799 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:55:00.752810 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:55:00.752820 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:55:00.752831 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:55:00.752841 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:55:00.752851 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:55:00.752862 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:55:00.752875 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:55:00.752885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:55:00.752896 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:55:00.753042 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:55:00.753248 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:55:00.753391 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:55:00.753530 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:55:00.756748 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:55:00.757008 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:55:00.757028 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:55:00.757341 kernel: Initialise system trusted keyrings
Mar 7 01:55:00.757354 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:55:00.757365 kernel: Key type asymmetric registered
Mar 7 01:55:00.757376 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:55:00.757388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:55:00.757399 kernel: io scheduler mq-deadline registered
Mar 7 01:55:00.757410 kernel: io scheduler kyber registered
Mar 7 01:55:00.757610 kernel: io scheduler bfq registered
Mar 7 01:55:00.757683 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:55:00.757699 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:55:00.757712 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:55:00.757724 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:55:00.757735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:55:00.757746 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:55:00.757758 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:55:00.757769 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:55:00.757785 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:55:00.757975 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:55:00.758257 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:55:00.758419 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:54:59 UTC (1772848499)
Mar 7 01:55:00.758571 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:55:00.758586 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:55:00.758598 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:55:00.758609 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:55:00.758627 kernel: Segment Routing with IPv6
Mar 7 01:55:00.758639 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:55:00.758650 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:55:00.758704 kernel: Key type dns_resolver registered
Mar 7 01:55:00.758716 kernel: IPI shorthand broadcast: enabled
Mar 7 01:55:00.758728 kernel: sched_clock: Marking stable (1685022003, 442131271)->(2690964974, -563811700)
Mar 7 01:55:00.758738 kernel: registered taskstats version 1
Mar 7 01:55:00.758748 kernel: Loading compiled-in X.509 certificates
Mar 7 01:55:00.758759 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:55:00.758774 kernel: Key type .fscrypt registered
Mar 7 01:55:00.758785 kernel: Key type fscrypt-provisioning registered
Mar 7 01:55:00.758796 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:55:00.758807 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:55:00.758818 kernel: ima: No architecture policies found
Mar 7 01:55:00.758829 kernel: clk: Disabling unused clocks
Mar 7 01:55:00.758840 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:55:00.758852 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:55:00.759013 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:55:00.759373 kernel: Run /init as init process
Mar 7 01:55:00.759388 kernel: with arguments:
Mar 7 01:55:00.759401 kernel: /init
Mar 7 01:55:00.759412 kernel: with environment:
Mar 7 01:55:00.759423 kernel: HOME=/
Mar 7 01:55:00.759435 kernel: TERM=linux
Mar 7 01:55:00.759448 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:55:00.759463 systemd[1]: Detected virtualization kvm.
Mar 7 01:55:00.759481 systemd[1]: Detected architecture x86-64.
Mar 7 01:55:00.759493 systemd[1]: Running in initrd.
Mar 7 01:55:00.759505 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:55:00.759516 systemd[1]: Hostname set to .
Mar 7 01:55:00.759529 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:55:00.759541 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:55:00.759553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:55:00.759565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:55:00.759583 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:55:00.759596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:55:00.759608 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:55:00.759621 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:55:00.759635 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:55:00.759647 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:55:00.759702 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:55:00.759715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:55:00.759727 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:55:00.759739 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:55:00.759768 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:55:00.759783 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:55:00.759796 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:55:00.759812 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:55:00.759824 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:55:00.759837 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:55:00.759849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:55:00.759862 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:55:00.759874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:55:00.759887 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:55:00.759900 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:55:00.759915 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:55:00.759928 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:55:00.759940 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:55:00.759952 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:55:00.759965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:55:00.759977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:55:00.759989 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:55:00.760002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:55:00.760046 systemd-journald[195]: Collecting audit messages is disabled.
Mar 7 01:55:00.760222 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:55:00.760387 systemd-journald[195]: Journal started
Mar 7 01:55:00.760551 systemd-journald[195]: Runtime Journal (/run/log/journal/54d2b187b64b49f7ae2f294d8c6dcf9c) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:55:00.763487 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:55:00.757369 systemd-modules-load[196]: Inserted module 'overlay'
Mar 7 01:55:00.946801 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:55:00.946838 kernel: Bridge firewalling registered
Mar 7 01:55:00.813285 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 7 01:55:00.965904 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:55:00.976295 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:55:00.985850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:55:01.001327 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:55:01.066818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:55:01.069943 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:55:01.097639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:55:01.139477 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:55:01.146500 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:55:01.159603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:55:01.172468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:55:01.181277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:55:01.187021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:55:01.252489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:55:01.287951 dracut-cmdline[228]: dracut-dracut-053 Mar 7 01:55:01.299911 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:55:01.389362 systemd-resolved[231]: Positive Trust Anchors: Mar 7 01:55:01.389403 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:55:01.389455 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:55:01.393413 systemd-resolved[231]: Defaulting to hostname 'linux'. Mar 7 01:55:01.416170 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:55:01.526430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:55:01.698278 kernel: SCSI subsystem initialized Mar 7 01:55:01.720097 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:55:01.752276 kernel: iscsi: registered transport (tcp) Mar 7 01:55:01.793473 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:55:01.793567 kernel: QLogic iSCSI HBA Driver Mar 7 01:55:01.970267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:55:01.990408 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:55:02.073311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
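The positive trust anchor logged above is the DNS root zone's DNSSEC DS record: key tag 20326 (the KSK-2017 key), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A small parser for that presentation format, with field names of our own choosing:

```python
def parse_ds(line: str) -> dict:
    # Parse "<owner> IN DS <key_tag> <algorithm> <digest_type> <digest>"
    # as logged by systemd-resolved for its positive trust anchors.
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = line.split()
    return {
        "owner": owner,
        "key_tag": int(key_tag),          # 20326 = root KSK-2017
        "algorithm": int(alg),            # 8 = RSASHA256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest.lower(),
    }

anchor = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
```

A digest type of 2 implies a 32-byte (64 hex character) SHA-256 digest, which the value above has.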
Mar 7 01:55:02.073386 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:55:02.077335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:55:02.164457 kernel: raid6: avx2x4 gen() 19189 MB/s Mar 7 01:55:02.184205 kernel: raid6: avx2x2 gen() 18353 MB/s Mar 7 01:55:02.203725 kernel: raid6: avx2x1 gen() 11715 MB/s Mar 7 01:55:02.203775 kernel: raid6: using algorithm avx2x4 gen() 19189 MB/s Mar 7 01:55:02.224509 kernel: raid6: .... xor() 5269 MB/s, rmw enabled Mar 7 01:55:02.224558 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:55:02.254699 kernel: xor: automatically using best checksumming function avx Mar 7 01:55:02.741051 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:55:02.768181 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:55:02.809838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:55:02.848517 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 7 01:55:02.872795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:55:02.904046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:55:02.962240 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Mar 7 01:55:03.067496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:55:03.101214 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:55:03.255421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:55:03.272296 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:55:03.298778 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:55:03.303456 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
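The raid6 lines above show the kernel benchmarking each available SIMD gen() routine and keeping the fastest (avx2x4 at 19189 MB/s on this boot). The selection amounts to a max over measured throughputs; a toy reconstruction using the figures from this log:

```python
# Throughputs (MB/s) measured during this boot, taken from the log above.
gen_speeds = {"avx2x4": 19189, "avx2x2": 18353, "avx2x1": 11715}

# The kernel keeps whichever candidate benchmarked fastest.
best = max(gen_speeds, key=gen_speeds.get)
print(best, gen_speeds[best])  # avx2x4 19189
```

Note the recovery path is chosen independently, which is why the log picks avx2x4 for gen() but avx2x2 for recovery.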
Mar 7 01:55:03.318194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:55:03.322040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:55:03.342352 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:55:03.402850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:55:03.403047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:55:03.419320 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:55:03.430813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:55:03.431205 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:55:03.447135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:55:03.476822 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:55:03.476885 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 01:55:03.486564 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 01:55:03.490976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:55:03.520818 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:55:03.520851 kernel: GPT:9289727 != 19775487 Mar 7 01:55:03.520893 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:55:03.520944 kernel: GPT:9289727 != 19775487 Mar 7 01:55:03.520995 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:55:03.521039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:55:03.521351 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:55:03.597118 kernel: libata version 3.00 loaded. 
Mar 7 01:55:03.690454 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460) Mar 7 01:55:03.693160 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:55:03.703124 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Mar 7 01:55:03.706402 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:55:03.940879 kernel: AES CTR mode by8 optimization enabled Mar 7 01:55:03.940919 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:55:03.941259 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:55:03.946964 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:55:03.947402 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:55:03.947701 kernel: scsi host0: ahci Mar 7 01:55:03.947982 kernel: scsi host1: ahci Mar 7 01:55:03.952134 kernel: scsi host2: ahci Mar 7 01:55:03.952382 kernel: scsi host3: ahci Mar 7 01:55:03.952623 kernel: scsi host4: ahci Mar 7 01:55:03.952903 kernel: scsi host5: ahci Mar 7 01:55:03.953199 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 7 01:55:03.953225 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 7 01:55:03.953240 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 7 01:55:03.953254 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 7 01:55:03.953267 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 7 01:55:03.953281 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 7 01:55:03.936852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:55:03.965483 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 7 01:55:04.007777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:55:04.043458 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:55:04.116531 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:55:04.116574 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:55:04.116592 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:55:04.064460 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 01:55:04.190871 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:55:04.190913 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:55:04.190930 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 01:55:04.190962 kernel: ata3.00: applying bridge limits Mar 7 01:55:04.190978 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:55:04.190994 kernel: ata3.00: configured for UDMA/100 Mar 7 01:55:04.191010 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:55:04.145900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:55:04.174841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:55:04.217784 disk-uuid[555]: Primary Header is updated. Mar 7 01:55:04.217784 disk-uuid[555]: Secondary Entries is updated. Mar 7 01:55:04.217784 disk-uuid[555]: Secondary Header is updated. Mar 7 01:55:04.238546 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:55:04.268123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:55:04.335324 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 01:55:04.509452 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:55:04.509874 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:55:04.543035 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:55:05.285282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:55:05.285348 disk-uuid[557]: The operation has completed successfully. Mar 7 01:55:05.467005 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:55:05.467423 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:55:05.510485 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:55:05.536117 sh[592]: Success Mar 7 01:55:05.610134 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:55:05.757613 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:55:05.781616 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:55:05.857528 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:55:05.875220 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:55:05.875262 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:55:05.875281 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:55:05.888339 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:55:05.888417 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:55:05.971745 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:55:05.988559 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:55:06.024340 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
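verity-setup.service validates /dev/mapper/usr against the `verity.usrhash` root hash passed on the kernel command line: dm-verity hashes the device in fixed-size blocks with SHA-256 (here via the sha256-ni implementation) and folds the digests into a Merkle tree whose root must match that hash. A simplified sketch of the tree shape — real dm-verity also salts each digest and stores the tree in an on-disk format, which this toy omits:

```python
import hashlib

def verity_root_hash(data: bytes, block_size: int = 4096) -> str:
    # Toy Merkle root in the dm-verity style: hash each data block, then
    # repeatedly hash groups of child digests until one digest remains.
    level = [hashlib.sha256(data[i:i + block_size]).digest()
             for i in range(0, len(data), block_size)] or [hashlib.sha256(b"").digest()]
    fanout = block_size // 32  # 32-byte SHA-256 digests per hash block
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                 for i in range(0, len(level), fanout)]
    return level[0].hex()
```

Any single-bit change in the data changes the root, which is why a mismatch with `verity.usrhash` makes the /usr mount fail rather than run tampered code.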
Mar 7 01:55:06.044385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:55:06.088427 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:55:06.088584 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:55:06.088605 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:55:06.099202 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:55:06.121892 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:55:06.131207 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:55:06.154238 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:55:06.179385 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:55:06.345564 ignition[688]: Ignition 2.19.0 Mar 7 01:55:06.345593 ignition[688]: Stage: fetch-offline Mar 7 01:55:06.345639 ignition[688]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:55:06.345652 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:55:06.345801 ignition[688]: parsed url from cmdline: "" Mar 7 01:55:06.345807 ignition[688]: no config URL provided Mar 7 01:55:06.345815 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:55:06.345828 ignition[688]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:55:06.345863 ignition[688]: op(1): [started] loading QEMU firmware config module Mar 7 01:55:06.345871 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 01:55:06.382025 ignition[688]: op(1): [finished] loading QEMU firmware config module Mar 7 01:55:06.451656 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:55:06.516259 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 7 01:55:06.591948 systemd-networkd[780]: lo: Link UP Mar 7 01:55:06.591983 systemd-networkd[780]: lo: Gained carrier Mar 7 01:55:06.619815 systemd-networkd[780]: Enumeration completed Mar 7 01:55:06.620577 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:55:06.622199 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:55:06.622204 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:55:06.630546 systemd-networkd[780]: eth0: Link UP Mar 7 01:55:06.630553 systemd-networkd[780]: eth0: Gained carrier Mar 7 01:55:06.630571 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:55:06.631054 systemd[1]: Reached target network.target - Network. Mar 7 01:55:06.732030 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:55:06.885238 ignition[688]: parsing config with SHA512: 8635eee7cae802ef99ded07e3940c72ff54d9e8c8b320da2f22d8f791ab8ce20b0a88469503aca28bccd5433cd89852b5fa564b4bea7f1fdd60ed95645efbc05 Mar 7 01:55:06.893360 unknown[688]: fetched base config from "system" Mar 7 01:55:06.893376 unknown[688]: fetched user config from "qemu" Mar 7 01:55:06.894799 ignition[688]: fetch-offline: fetch-offline passed Mar 7 01:55:06.894933 ignition[688]: Ignition finished successfully Mar 7 01:55:06.909495 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.132 Mar 7 01:55:06.909509 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Mar 7 01:55:06.946821 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
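Before applying the QEMU-provided config, Ignition logs the SHA512 of the raw bytes it fetched ("parsing config with SHA512: 8635…"). The same fingerprint can be reproduced for any config blob with hashlib; the JSON below is a hypothetical stand-in, not the config from this boot:

```python
import hashlib
import json

# Hypothetical minimal Ignition config, only for illustration.
config = json.dumps({"ignition": {"version": "3.3.0"}}).encode()

# Ignition logs the hex SHA512 of the exact bytes it fetched.
digest = hashlib.sha512(config).hexdigest()
print(digest[:16], "...")  # 128 hex characters in total
```

Hashing the raw bytes (rather than a re-serialized form) is what makes the logged digest a stable identifier for the delivered config.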
Mar 7 01:55:06.961895 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 01:55:07.013220 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:55:07.071152 ignition[784]: Ignition 2.19.0 Mar 7 01:55:07.071187 ignition[784]: Stage: kargs Mar 7 01:55:07.071631 ignition[784]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:55:07.071648 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:55:07.084009 ignition[784]: kargs: kargs passed Mar 7 01:55:07.084939 ignition[784]: Ignition finished successfully Mar 7 01:55:07.105476 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:55:07.141836 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:55:07.216434 ignition[792]: Ignition 2.19.0 Mar 7 01:55:07.216467 ignition[792]: Stage: disks Mar 7 01:55:07.221885 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:55:07.222274 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:55:07.232759 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:55:07.225735 ignition[792]: disks: disks passed Mar 7 01:55:07.244588 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:55:07.225813 ignition[792]: Ignition finished successfully Mar 7 01:55:07.261549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:55:07.261629 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:55:07.261734 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:55:07.261793 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:55:07.326368 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 7 01:55:07.436413 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 01:55:07.470555 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:55:07.510866 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:55:08.126377 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:55:08.138604 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:55:08.149953 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:55:08.216797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:55:08.262106 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:55:08.288158 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:55:08.317249 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Mar 7 01:55:08.317308 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:55:08.317329 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:55:08.288249 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:55:08.288291 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:55:08.342493 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:55:08.365574 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:55:08.396003 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:55:08.403553 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 01:55:08.416666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:55:08.526527 systemd-networkd[780]: eth0: Gained IPv6LL Mar 7 01:55:08.544396 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:55:08.565344 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:55:08.587212 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:55:08.613287 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:55:09.127195 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:55:09.181898 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:55:09.218435 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:55:09.298946 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:55:09.339909 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:55:09.489121 ignition[925]: INFO : Ignition 2.19.0 Mar 7 01:55:09.489121 ignition[925]: INFO : Stage: mount Mar 7 01:55:09.489121 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:55:09.489121 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:55:09.489121 ignition[925]: INFO : mount: mount passed Mar 7 01:55:09.554250 ignition[925]: INFO : Ignition finished successfully Mar 7 01:55:09.514043 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:55:09.609014 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:55:09.642841 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:55:09.699000 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 7 01:55:09.735337 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Mar 7 01:55:09.735396 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:55:09.745721 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:55:09.752575 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:55:09.779898 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:55:09.790757 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:55:09.872039 ignition[955]: INFO : Ignition 2.19.0 Mar 7 01:55:09.872039 ignition[955]: INFO : Stage: files Mar 7 01:55:09.887597 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:55:09.887597 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:55:09.900145 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:55:09.915577 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:55:09.915577 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:55:09.951024 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:55:09.956751 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:55:09.956751 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:55:09.955856 unknown[955]: wrote ssh authorized keys file for user: core Mar 7 01:55:09.985730 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:55:09.985730 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:55:09.985730 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:55:09.985730 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:55:10.084721 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 7 01:55:10.374157 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:55:10.374157 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 7 01:55:10.414975 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 7 01:55:10.713136 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 7 01:55:11.269950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 7 01:55:11.269950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:55:11.269950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:55:11.269950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:55:11.269950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:55:11.269950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:55:11.446839 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 7 01:55:11.745880 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 7 01:55:11.861816 kernel: hrtimer: interrupt took 16100409 ns Mar 7 01:55:13.256343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:55:13.256343 ignition[955]: INFO : files: op(d): [started] processing unit "containerd.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(d): [finished] processing unit "containerd.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Mar 7 01:55:13.304043 ignition[955]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Mar 7 01:55:13.453934 ignition[955]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:55:13.453934 ignition[955]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:55:13.453934 ignition[955]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Mar 7 01:55:13.453934 ignition[955]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:55:13.453934 ignition[955]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:55:13.453934 ignition[955]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:55:13.453934 ignition[955]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:55:13.453934 ignition[955]: INFO : files: files passed Mar 7 01:55:13.453934 ignition[955]: INFO : Ignition finished successfully Mar 7 01:55:13.453300 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:55:13.772990 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:55:13.812895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:55:13.915849 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:55:13.918533 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:55:13.918765 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:55:14.002954 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:55:14.002954 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:55:14.075296 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:55:14.107178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 7 01:55:14.107658 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:55:14.181473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:55:14.293639 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:55:14.295206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:55:14.313428 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:55:14.315545 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:55:14.315747 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:55:14.332578 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:55:14.427849 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:55:14.521362 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:55:14.622933 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:55:14.656240 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:55:14.674937 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:55:14.708173 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:55:14.708435 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:55:14.742255 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:55:14.775181 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:55:14.779011 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:55:14.806261 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:55:14.843255 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:55:14.856884 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:55:14.865712 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:55:14.875184 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:55:14.885006 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:55:14.897429 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:55:14.993504 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:55:14.993780 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:55:15.049928 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:55:15.064328 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:55:15.076388 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:55:15.087491 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:55:15.149190 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:55:15.149414 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:55:15.199829 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:55:15.206846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:55:15.248370 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:55:15.258495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:55:15.265368 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:55:15.287471 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:55:15.293599 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:55:15.327403 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:55:15.327555 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:55:15.461834 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:55:15.462133 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:55:15.497026 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:55:15.497288 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:55:15.612530 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:55:15.613545 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:55:15.675983 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:55:15.684503 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:55:15.684776 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:55:15.760242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:55:15.790106 ignition[1010]: INFO : Ignition 2.19.0
Mar 7 01:55:15.790106 ignition[1010]: INFO : Stage: umount
Mar 7 01:55:15.790106 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:55:15.790106 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:55:15.790106 ignition[1010]: INFO : umount: umount passed
Mar 7 01:55:15.790106 ignition[1010]: INFO : Ignition finished successfully
Mar 7 01:55:15.861799 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:55:15.862729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:55:15.882912 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:55:15.883152 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:55:15.951398 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:55:15.963971 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:55:16.060978 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:55:16.070948 systemd[1]: Stopped target network.target - Network.
Mar 7 01:55:16.071222 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:55:16.071427 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:55:16.071647 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:55:16.078350 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:55:16.082168 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:55:16.082287 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:55:16.082407 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:55:16.082481 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:55:16.082921 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:55:16.083207 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:55:16.089625 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:55:16.090736 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:55:16.097475 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:55:16.097645 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:55:16.113507 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:55:16.113603 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:55:16.169122 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:55:16.169352 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:55:16.305883 systemd-networkd[780]: eth0: DHCPv6 lease lost
Mar 7 01:55:16.311927 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:55:16.312106 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:55:16.378780 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:55:16.379182 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:55:16.417776 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:55:16.418167 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:55:16.471978 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:55:16.473775 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:55:16.473946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:55:16.485820 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:55:16.485920 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:55:16.506215 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:55:16.506334 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:55:16.553944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:55:16.660897 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:55:16.664597 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:55:16.690875 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:55:16.691000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:55:16.695044 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:55:16.695607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:55:16.703327 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:55:16.703429 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:55:16.711291 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:55:16.711381 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:55:16.774380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:55:16.774483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:55:16.802606 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:55:16.807850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:55:16.807949 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:55:16.816610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:55:16.816747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:55:16.824948 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:55:16.825796 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:55:16.847509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:55:16.848800 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:55:16.863661 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:55:16.904027 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:55:16.953051 systemd[1]: Switching root.
Mar 7 01:55:17.007328 systemd-journald[195]: Journal stopped
Mar 7 01:55:21.091525 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:55:21.091623 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:55:21.091660 kernel: SELinux: policy capability open_perms=1
Mar 7 01:55:21.091678 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:55:21.091729 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:55:21.091753 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:55:21.091769 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:55:21.091786 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:55:21.091817 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:55:21.091833 kernel: audit: type=1403 audit(1772848517.807:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:55:21.091851 systemd[1]: Successfully loaded SELinux policy in 133.457ms.
Mar 7 01:55:21.091883 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 43.968ms.
Mar 7 01:55:21.091901 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:55:21.091917 systemd[1]: Detected virtualization kvm.
Mar 7 01:55:21.091934 systemd[1]: Detected architecture x86-64.
Mar 7 01:55:21.091951 systemd[1]: Detected first boot.
Mar 7 01:55:21.091967 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:55:21.091983 zram_generator::config[1071]: No configuration found.
Mar 7 01:55:21.092000 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:55:21.092016 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:55:21.092043 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 01:55:21.092115 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:55:21.092141 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:55:21.092161 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:55:21.092180 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:55:21.092199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:55:21.092218 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:55:21.092236 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:55:21.092262 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:55:21.092280 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:55:21.092300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:55:21.092321 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:55:21.092338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:55:21.092356 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:55:21.092375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:55:21.092396 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:55:21.092414 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:55:21.092437 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:55:21.092458 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:55:21.092475 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:55:21.092494 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:55:21.092513 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:55:21.092530 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:55:21.092547 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:55:21.092568 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:55:21.092593 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:55:21.092612 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:55:21.092631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:55:21.092649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:55:21.092667 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:55:21.092724 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:55:21.092746 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:55:21.092765 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:55:21.092784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:21.092808 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:55:21.092833 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:55:21.092853 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:55:21.092872 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:55:21.092891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:55:21.092912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:55:21.092929 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:55:21.092949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:55:21.092966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:55:21.092990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:55:21.093009 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:55:21.093031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:55:21.093050 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:55:21.093134 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 01:55:21.093156 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 01:55:21.093176 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:55:21.093194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:55:21.093219 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:55:21.093241 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:55:21.093259 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:55:21.093277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:21.093330 systemd-journald[1171]: Collecting audit messages is disabled.
Mar 7 01:55:21.093364 systemd-journald[1171]: Journal started
Mar 7 01:55:21.093403 systemd-journald[1171]: Runtime Journal (/run/log/journal/54d2b187b64b49f7ae2f294d8c6dcf9c) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:55:21.124117 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:55:21.124214 kernel: fuse: init (API version 7.39)
Mar 7 01:55:21.132823 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:55:21.161323 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:55:21.174774 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:55:21.180035 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:55:21.184402 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:55:21.189757 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:55:21.195036 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:55:21.210660 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:55:21.216969 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:55:21.217327 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:55:21.223451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:55:21.223759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:55:21.231264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:55:21.231554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:55:21.240301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:55:21.246781 kernel: loop: module loaded
Mar 7 01:55:21.254606 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:55:21.267817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:55:21.289611 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:55:21.315204 kernel: ACPI: bus type drm_connector registered
Mar 7 01:55:21.318602 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:55:21.328963 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:55:21.351246 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:55:21.359289 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:55:21.376012 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:55:21.378757 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:55:21.397794 systemd-journald[1171]: Time spent on flushing to /var/log/journal/54d2b187b64b49f7ae2f294d8c6dcf9c is 62.146ms for 927 entries.
Mar 7 01:55:21.397794 systemd-journald[1171]: System Journal (/var/log/journal/54d2b187b64b49f7ae2f294d8c6dcf9c) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:55:21.519320 systemd-journald[1171]: Received client request to flush runtime journal.
Mar 7 01:55:21.403977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:55:21.419830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:55:21.441529 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:55:21.445324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:55:21.465319 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:55:21.465615 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:55:21.475791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:55:21.476142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:55:21.489590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:55:21.497737 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:55:21.534766 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:55:21.541310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:55:21.547446 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:55:21.556944 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:55:21.565826 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:55:21.570889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:55:21.576748 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:55:21.579611 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Mar 7 01:55:21.579640 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Mar 7 01:55:21.585588 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:55:21.590580 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:55:21.602336 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:55:21.609669 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:55:21.674486 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:55:21.692900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:55:21.747943 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Mar 7 01:55:21.747991 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Mar 7 01:55:21.758009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:55:22.593735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:55:22.633743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:55:22.687797 systemd-udevd[1237]: Using default interface naming scheme 'v255'.
Mar 7 01:55:22.793173 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:55:22.849386 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:55:22.897404 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:55:22.992780 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 7 01:55:23.137386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1252)
Mar 7 01:55:23.198433 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:55:23.246121 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:55:23.258825 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:55:23.361508 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:55:23.373642 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:55:23.382758 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:55:23.383647 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:55:23.453282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:55:23.508554 systemd-networkd[1242]: lo: Link UP
Mar 7 01:55:23.509159 systemd-networkd[1242]: lo: Gained carrier
Mar 7 01:55:23.512212 systemd-networkd[1242]: Enumeration completed
Mar 7 01:55:23.513423 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:55:23.513430 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:55:23.515531 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:55:23.516278 systemd-networkd[1242]: eth0: Link UP
Mar 7 01:55:23.516382 systemd-networkd[1242]: eth0: Gained carrier
Mar 7 01:55:23.516473 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:55:23.557307 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:55:23.569974 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:55:23.597393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:55:23.599239 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:55:24.022397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:55:24.053747 kernel: kvm_amd: TSC scaling supported
Mar 7 01:55:24.053842 kernel: kvm_amd: Nested Virtualization enabled
Mar 7 01:55:24.053896 kernel: kvm_amd: Nested Paging enabled
Mar 7 01:55:24.055172 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 7 01:55:24.064809 kernel: kvm_amd: PMU virtualization is disabled
Mar 7 01:55:24.433769 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:55:24.503997 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:55:24.551464 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:55:24.606689 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:55:24.668996 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:55:24.680466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:55:24.714491 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:55:24.764787 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:55:24.855671 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:55:24.875641 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:55:24.895992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:55:24.896051 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:55:24.911912 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:55:24.929460 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:55:24.963410 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:55:24.973210 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:55:25.013251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:55:25.022000 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:55:25.064572 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:55:25.093390 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:55:25.125290 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:55:25.146465 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:55:25.167970 kernel: loop0: detected capacity change from 0 to 228704
Mar 7 01:55:25.234351 systemd-networkd[1242]: eth0: Gained IPv6LL
Mar 7 01:55:25.253990 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:55:25.257372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:55:25.274486 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:55:25.326003 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:55:25.422292 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 01:55:25.626972 kernel: loop2: detected capacity change from 0 to 142488
Mar 7 01:55:25.763806 kernel: loop3: detected capacity change from 0 to 228704
Mar 7 01:55:25.872159 kernel: loop4: detected capacity change from 0 to 140768
Mar 7 01:55:25.954130 kernel: loop5: detected capacity change from 0 to 142488
Mar 7 01:55:26.020932 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 01:55:26.022041 (sd-merge)[1308]: Merged extensions into '/usr'.
Mar 7 01:55:26.039848 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:55:26.039896 systemd[1]: Reloading...
Mar 7 01:55:26.204979 zram_generator::config[1341]: No configuration found.
Mar 7 01:55:26.444636 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:55:26.526976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:55:26.642142 systemd[1]: Reloading finished in 601 ms.
Mar 7 01:55:26.666303 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:55:26.719312 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:55:26.776478 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:55:26.800861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:55:26.815369 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:55:26.816252 systemd[1]: Reloading...
Mar 7 01:55:26.918263 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:55:26.925501 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:55:26.929949 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:55:26.930392 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Mar 7 01:55:26.930510 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Mar 7 01:55:26.948997 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:55:26.949371 systemd-tmpfiles[1380]: Skipping /boot
Mar 7 01:55:27.036307 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:55:27.036327 systemd-tmpfiles[1380]: Skipping /boot
Mar 7 01:55:27.105785 zram_generator::config[1411]: No configuration found.
Mar 7 01:55:27.510483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:55:27.716504 systemd[1]: Reloading finished in 898 ms.
Mar 7 01:55:27.800683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:55:27.839044 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:55:27.859748 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:55:27.878948 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:55:27.909559 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:55:27.938425 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:55:27.971312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:27.973164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:55:27.984688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:55:28.012545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:55:28.029435 augenrules[1474]: No rules
Mar 7 01:55:28.039302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:55:28.046917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:55:28.047352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:28.053177 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:55:28.062794 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:55:28.075191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:55:28.075631 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:55:28.084729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:55:28.085469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:55:28.091842 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:55:28.092469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:55:28.120030 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:55:28.135198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:28.135774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:55:28.140842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:55:28.158457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:55:28.174913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:55:28.178888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:55:28.191683 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:55:28.200008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:28.204438 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:55:28.222498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:55:28.223924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:55:28.236553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:55:28.236953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:55:28.249314 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:55:28.252964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:55:28.264939 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:55:28.321412 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:55:28.347682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:28.348129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:55:28.371443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:55:28.390018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:55:28.404989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:55:28.417006 systemd-resolved[1462]: Positive Trust Anchors:
Mar 7 01:55:28.417514 systemd-resolved[1462]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:55:28.417620 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:55:28.422847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:55:28.441212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:55:28.447332 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:55:28.458265 systemd-resolved[1462]: Defaulting to hostname 'linux'.
Mar 7 01:55:28.466968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:55:28.467019 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:55:28.468418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:55:28.468784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:55:28.480891 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:55:28.493511 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:55:28.494581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:55:28.503043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:55:28.505318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:55:28.512993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:55:28.520614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:55:28.536790 systemd[1]: Reached target network.target - Network.
Mar 7 01:55:28.540928 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:55:28.557442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:55:28.571593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:55:28.572975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:55:28.717648 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:55:28.734749 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:55:28.747993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:55:29.315900 systemd-timesyncd[1517]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 7 01:55:29.317236 systemd-resolved[1462]: Clock change detected. Flushing caches.
Mar 7 01:55:29.320617 systemd-timesyncd[1517]: Initial clock synchronization to Sat 2026-03-07 01:55:29.315681 UTC.
Mar 7 01:55:29.331216 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:55:29.345435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:55:29.360844 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:55:29.362114 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:55:29.374286 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:55:29.385466 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:55:29.400441 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:55:29.412061 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:55:29.424759 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:55:29.439250 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:55:29.451202 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:55:29.497694 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:55:29.513218 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:55:29.529446 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:55:29.536505 systemd[1]: System is tainted: cgroupsv1
Mar 7 01:55:29.536600 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:55:29.536642 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:55:29.547065 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:55:29.556982 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 01:55:29.582701 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:55:29.598393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:55:29.617221 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:55:29.622896 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:55:29.645041 jq[1532]: false
Mar 7 01:55:29.644267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:55:29.657407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:55:29.682197 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:55:29.688917 extend-filesystems[1534]: Found loop3
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found loop4
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found loop5
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found sr0
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda1
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda2
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda3
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found usr
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda4
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda6
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda7
Mar 7 01:55:29.696334 extend-filesystems[1534]: Found vda9
Mar 7 01:55:29.696334 extend-filesystems[1534]: Checking size of /dev/vda9
Mar 7 01:55:29.809861 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 7 01:55:29.703560 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:55:29.733030 dbus-daemon[1530]: [system] SELinux support is enabled
Mar 7 01:55:29.810641 extend-filesystems[1534]: Resized partition /dev/vda9
Mar 7 01:55:29.736341 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:55:29.841422 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:55:29.765353 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:55:29.836836 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:55:29.849734 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:55:29.865463 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:55:29.881368 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:55:29.904184 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1569)
Mar 7 01:55:29.910652 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:55:29.972252 jq[1570]: true
Mar 7 01:55:29.977542 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:55:29.978084 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:55:29.987811 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:55:29.988316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:55:30.016938 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:55:30.026481 update_engine[1566]: I20260307 01:55:30.026352 1566 main.cc:92] Flatcar Update Engine starting
Mar 7 01:55:30.032852 update_engine[1566]: I20260307 01:55:30.032750 1566 update_check_scheduler.cc:74] Next update check in 7m5s
Mar 7 01:55:30.048196 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 7 01:55:30.071293 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:55:30.071716 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:55:30.115891 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 01:55:30.115891 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 7 01:55:30.115891 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 7 01:55:30.151459 extend-filesystems[1534]: Resized filesystem in /dev/vda9
Mar 7 01:55:30.133632 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:55:30.134506 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:55:30.196355 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:55:30.196443 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:55:30.199396 systemd-logind[1563]: New seat seat0.
Mar 7 01:55:30.206713 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:55:30.230649 (ntainerd)[1587]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:55:30.238449 jq[1586]: true
Mar 7 01:55:30.287746 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 01:55:30.288872 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 01:55:30.360823 tar[1582]: linux-amd64/LICENSE
Mar 7 01:55:30.361357 tar[1582]: linux-amd64/helm
Mar 7 01:55:30.374869 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:55:30.440201 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:55:30.456113 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:55:30.456483 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:55:30.456686 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:55:30.472508 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:55:30.472973 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:55:30.496375 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:55:30.514557 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:55:30.527900 bash[1622]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:55:30.544879 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:55:30.556932 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:55:30.764972 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:55:30.880689 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:55:30.987658 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:55:31.028327 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:55:31.031896 containerd[1587]: time="2026-03-07T01:55:31.030549516Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:55:31.044909 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:55:31.045422 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:55:31.071303 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:55:31.092940 containerd[1587]: time="2026-03-07T01:55:31.089010975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.094495 containerd[1587]: time="2026-03-07T01:55:31.093489135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:55:31.094495 containerd[1587]: time="2026-03-07T01:55:31.093749251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:55:31.098875 containerd[1587]: time="2026-03-07T01:55:31.095889667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099214124Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099250251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099364515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099386104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099725057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099750024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099825656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.099843759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.100000442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.100435394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101107 containerd[1587]: time="2026-03-07T01:55:31.100660354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:55:31.101577 containerd[1587]: time="2026-03-07T01:55:31.100681924Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:55:31.101577 containerd[1587]: time="2026-03-07T01:55:31.100859615Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:55:31.101577 containerd[1587]: time="2026-03-07T01:55:31.100935116Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:55:31.114884 containerd[1587]: time="2026-03-07T01:55:31.114486316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:55:31.114884 containerd[1587]: time="2026-03-07T01:55:31.114615748Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:55:31.114884 containerd[1587]: time="2026-03-07T01:55:31.114661002Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:55:31.114884 containerd[1587]: time="2026-03-07T01:55:31.114684156Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:55:31.114884 containerd[1587]: time="2026-03-07T01:55:31.114707068Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:55:31.115536 containerd[1587]: time="2026-03-07T01:55:31.115414949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:55:31.115953 containerd[1587]: time="2026-03-07T01:55:31.115888544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:55:31.116208 containerd[1587]: time="2026-03-07T01:55:31.116093175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:55:31.116208 containerd[1587]: time="2026-03-07T01:55:31.116197170Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:55:31.116293 containerd[1587]: time="2026-03-07T01:55:31.116223930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:55:31.116293 containerd[1587]: time="2026-03-07T01:55:31.116245761Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116293 containerd[1587]: time="2026-03-07T01:55:31.116267331Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116396 containerd[1587]: time="2026-03-07T01:55:31.116293029Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116396 containerd[1587]: time="2026-03-07T01:55:31.116313928Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116396 containerd[1587]: time="2026-03-07T01:55:31.116341490Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116396 containerd[1587]: time="2026-03-07T01:55:31.116362609Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116396 containerd[1587]: time="2026-03-07T01:55:31.116383307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116518 containerd[1587]: time="2026-03-07T01:55:31.116401381Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:55:31.116518 containerd[1587]: time="2026-03-07T01:55:31.116429483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116518 containerd[1587]: time="2026-03-07T01:55:31.116451444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116518 containerd[1587]: time="2026-03-07T01:55:31.116470149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116518 containerd[1587]: time="2026-03-07T01:55:31.116490368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116508050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116543015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116561921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116581687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116599621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116621052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116640137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116669 containerd[1587]: time="2026-03-07T01:55:31.116658982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116677126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116699648Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116728472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116746967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116762856Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116871178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:55:31.116915 containerd[1587]: time="2026-03-07T01:55:31.116906785Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:55:31.117100 containerd[1587]: time="2026-03-07T01:55:31.116925439Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:55:31.117100 containerd[1587]: time="2026-03-07T01:55:31.116943614Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:55:31.117100 containerd[1587]: time="2026-03-07T01:55:31.116959213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.117100 containerd[1587]: time="2026-03-07T01:55:31.116977787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:55:31.117100 containerd[1587]: time="2026-03-07T01:55:31.116996032Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:55:31.117100 containerd[1587]: time="2026-03-07T01:55:31.117011290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:55:31.120980 containerd[1587]: time="2026-03-07T01:55:31.119469750Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:55:31.120980 containerd[1587]: time="2026-03-07T01:55:31.119577961Z" level=info msg="Connect containerd service"
Mar 7 01:55:31.120980 containerd[1587]: time="2026-03-07T01:55:31.119631341Z" level=info msg="using legacy CRI server"
Mar 7 01:55:31.120980 containerd[1587]: time="2026-03-07T01:55:31.119644867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:55:31.120980 containerd[1587]: time="2026-03-07T01:55:31.119842726Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:55:31.120980 containerd[1587]: time="2026-03-07T01:55:31.120761592Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:55:31.121401 containerd[1587]: time="2026-03-07T01:55:31.121110262Z" level=info msg="Start subscribing containerd event"
Mar 7 01:55:31.121401 containerd[1587]: time="2026-03-07T01:55:31.121238031Z" level=info msg="Start recovering state"
Mar 7 01:55:31.121401 containerd[1587]: time="2026-03-07T01:55:31.121322298Z" level=info msg="Start event monitor"
Mar 7 01:55:31.121401 containerd[1587]: time="2026-03-07T01:55:31.121355340Z" level=info msg="Start snapshots syncer"
Mar 7 01:55:31.121401 containerd[1587]: time="2026-03-07T01:55:31.121368074Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:55:31.121401 containerd[1587]: time="2026-03-07T01:55:31.121383262Z" level=info msg="Start streaming server"
Mar 7 01:55:31.126655 containerd[1587]: time="2026-03-07T01:55:31.125616425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:55:31.126655 containerd[1587]: time="2026-03-07T01:55:31.125714017Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:55:31.129760 containerd[1587]: time="2026-03-07T01:55:31.129343303Z" level=info msg="containerd successfully booted in 0.100748s"
Mar 7 01:55:31.129632 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:55:31.141879 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:55:31.187423 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:55:31.211569 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:55:31.223312 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:55:32.035872 tar[1582]: linux-amd64/README.md
Mar 7 01:55:32.101970 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:55:32.857461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:55:32.870187 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:55:32.889167 systemd[1]: Startup finished in 19.773s (kernel) + 14.645s (userspace) = 34.418s.
Mar 7 01:55:32.971013 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:55:34.928802 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:55:34.955091 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028). Mar 7 01:55:35.154570 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:35.171417 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:35.237907 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:55:35.262001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:55:35.276026 systemd-logind[1563]: New session 1 of user core. Mar 7 01:55:35.336200 kubelet[1674]: E0307 01:55:35.335673 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:55:35.359250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:55:35.361989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:55:35.366836 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:55:35.429213 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:55:35.449345 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:55:35.815984 systemd[1694]: Queued start job for default target default.target. 
Mar 7 01:55:35.817213 systemd[1694]: Created slice app.slice - User Application Slice. Mar 7 01:55:35.818038 systemd[1694]: Reached target paths.target - Paths. Mar 7 01:55:35.818058 systemd[1694]: Reached target timers.target - Timers. Mar 7 01:55:35.835352 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:55:35.865636 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:55:35.865756 systemd[1694]: Reached target sockets.target - Sockets. Mar 7 01:55:35.865818 systemd[1694]: Reached target basic.target - Basic System. Mar 7 01:55:35.865928 systemd[1694]: Reached target default.target - Main User Target. Mar 7 01:55:35.865981 systemd[1694]: Startup finished in 386ms. Mar 7 01:55:35.866215 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:55:35.884551 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:55:35.994957 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:56042.service - OpenSSH per-connection server daemon (10.0.0.1:56042). Mar 7 01:55:36.172732 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 56042 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:36.175855 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:36.214626 systemd-logind[1563]: New session 2 of user core. Mar 7 01:55:36.229931 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:55:36.382424 sshd[1706]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:36.441275 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:56046.service - OpenSSH per-connection server daemon (10.0.0.1:56046). Mar 7 01:55:36.443347 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:56042.service: Deactivated successfully. Mar 7 01:55:36.452930 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:55:36.459492 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. 
Mar 7 01:55:36.464935 systemd-logind[1563]: Removed session 2. Mar 7 01:55:36.478532 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 56046 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:36.493731 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:36.532434 systemd-logind[1563]: New session 3 of user core. Mar 7 01:55:36.541813 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:55:36.635033 sshd[1711]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:36.654740 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:56052.service - OpenSSH per-connection server daemon (10.0.0.1:56052). Mar 7 01:55:36.656298 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:56046.service: Deactivated successfully. Mar 7 01:55:36.666008 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:55:36.670107 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:55:36.672935 systemd-logind[1563]: Removed session 3. Mar 7 01:55:36.764214 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 56052 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:36.769380 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:36.831520 systemd-logind[1563]: New session 4 of user core. Mar 7 01:55:36.849267 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:55:36.989265 sshd[1719]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:37.036527 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066). Mar 7 01:55:37.037364 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:56052.service: Deactivated successfully. Mar 7 01:55:37.070069 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:55:37.080056 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 7 01:55:37.118499 systemd-logind[1563]: Removed session 4. Mar 7 01:55:37.182534 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:37.216486 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:37.243240 systemd-logind[1563]: New session 5 of user core. Mar 7 01:55:37.255932 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:55:37.412006 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:55:37.412892 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:55:37.453936 sudo[1734]: pam_unix(sudo:session): session closed for user root Mar 7 01:55:37.467272 sshd[1727]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:37.481651 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:56082.service - OpenSSH per-connection server daemon (10.0.0.1:56082). Mar 7 01:55:37.515020 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:56066.service: Deactivated successfully. Mar 7 01:55:37.527231 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:55:37.527358 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:55:37.538268 systemd-logind[1563]: Removed session 5. Mar 7 01:55:37.611723 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 56082 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:37.616725 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:37.641515 systemd-logind[1563]: New session 6 of user core. Mar 7 01:55:37.657301 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 01:55:37.736957 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:55:37.738015 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:55:37.764287 sudo[1744]: pam_unix(sudo:session): session closed for user root Mar 7 01:55:37.776581 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:55:37.778101 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:55:37.844339 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:55:37.847493 auditctl[1747]: No rules Mar 7 01:55:37.851953 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:55:37.852534 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:55:37.876673 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:55:37.986771 augenrules[1766]: No rules Mar 7 01:55:37.989703 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:55:37.993717 sudo[1743]: pam_unix(sudo:session): session closed for user root Mar 7 01:55:38.025685 sshd[1736]: pam_unix(sshd:session): session closed for user core Mar 7 01:55:38.032527 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:56082.service: Deactivated successfully. Mar 7 01:55:38.044098 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:55:38.048245 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:55:38.059611 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:56098.service - OpenSSH per-connection server daemon (10.0.0.1:56098). Mar 7 01:55:38.079430 systemd-logind[1563]: Removed session 6. 
Mar 7 01:55:38.152017 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 56098 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:55:38.156089 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:55:38.175811 systemd-logind[1563]: New session 7 of user core. Mar 7 01:55:38.196389 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:55:38.287843 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:55:38.288380 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:55:39.223560 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:55:39.227178 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:55:40.519887 dockerd[1796]: time="2026-03-07T01:55:40.515216583Z" level=info msg="Starting up" Mar 7 01:55:40.978047 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1528722018-merged.mount: Deactivated successfully. Mar 7 01:55:41.476437 dockerd[1796]: time="2026-03-07T01:55:41.475722384Z" level=info msg="Loading containers: start." Mar 7 01:55:42.219560 kernel: Initializing XFRM netlink socket Mar 7 01:55:42.948433 systemd-networkd[1242]: docker0: Link UP Mar 7 01:55:43.096058 dockerd[1796]: time="2026-03-07T01:55:43.090766485Z" level=info msg="Loading containers: done." Mar 7 01:55:45.430582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:55:52.370430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:55:52.527237 dockerd[1796]: time="2026-03-07T01:55:52.510506435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:55:52.530634 dockerd[1796]: time="2026-03-07T01:55:52.529883891Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:55:52.530634 dockerd[1796]: time="2026-03-07T01:55:52.530172831Z" level=info msg="Daemon has completed initialization" Mar 7 01:55:53.204864 dockerd[1796]: time="2026-03-07T01:55:53.203074009Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:55:53.203723 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:55:53.614599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:55:53.615088 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:55:54.255290 kubelet[1947]: E0307 01:55:54.254971 1947 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:55:54.296711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:55:54.297302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:55:58.576424 containerd[1587]: time="2026-03-07T01:55:58.574732125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:56:00.304730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93607043.mount: Deactivated successfully. 
Mar 7 01:56:04.467503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:56:04.518226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:56:05.322706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:56:05.346523 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:56:05.569117 kubelet[2036]: E0307 01:56:05.568651 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:56:05.576396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:56:05.576759 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:56:08.617649 containerd[1587]: time="2026-03-07T01:56:08.615296143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:08.622353 containerd[1587]: time="2026-03-07T01:56:08.622192465Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 7 01:56:08.628806 containerd[1587]: time="2026-03-07T01:56:08.628709569Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:08.652639 containerd[1587]: time="2026-03-07T01:56:08.652400797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:08.662656 containerd[1587]: time="2026-03-07T01:56:08.659003074Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 10.08417384s" Mar 7 01:56:08.662656 containerd[1587]: time="2026-03-07T01:56:08.659088842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:56:08.673846 containerd[1587]: time="2026-03-07T01:56:08.673378198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:56:15.030904 containerd[1587]: time="2026-03-07T01:56:15.030717329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:15.040361 containerd[1587]: time="2026-03-07T01:56:15.039962648Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 7 01:56:15.049933 containerd[1587]: time="2026-03-07T01:56:15.049086213Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:15.077253 containerd[1587]: time="2026-03-07T01:56:15.077022167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:15.089234 containerd[1587]: time="2026-03-07T01:56:15.085422349Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 6.411965295s" Mar 7 01:56:15.089234 containerd[1587]: time="2026-03-07T01:56:15.085501971Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:56:15.097900 containerd[1587]: time="2026-03-07T01:56:15.096724637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:56:15.480934 update_engine[1566]: I20260307 01:56:15.477293 1566 update_attempter.cc:509] Updating boot flags... Mar 7 01:56:15.597628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 7 01:56:15.661841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:56:15.731196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2058) Mar 7 01:56:15.925846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2060) Mar 7 01:56:16.752378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:56:16.781687 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:56:17.037568 kubelet[2076]: E0307 01:56:17.036411 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:56:17.060200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:56:17.060556 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:56:20.323278 containerd[1587]: time="2026-03-07T01:56:20.323215325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:20.325648 containerd[1587]: time="2026-03-07T01:56:20.325190418Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 7 01:56:20.330167 containerd[1587]: time="2026-03-07T01:56:20.329937763Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:20.344166 containerd[1587]: time="2026-03-07T01:56:20.343855789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:20.349269 containerd[1587]: time="2026-03-07T01:56:20.345985475Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 5.249210871s" Mar 7 01:56:20.349269 containerd[1587]: time="2026-03-07T01:56:20.348044550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:56:20.353729 containerd[1587]: time="2026-03-07T01:56:20.353472523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:56:27.238811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:56:27.556661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:56:30.378787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:56:30.418275 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:56:31.495518 kubelet[2107]: E0307 01:56:31.488698 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:56:31.621315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:56:31.621908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:56:33.340493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381209710.mount: Deactivated successfully. Mar 7 01:56:39.236685 containerd[1587]: time="2026-03-07T01:56:39.236448873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:39.240990 containerd[1587]: time="2026-03-07T01:56:39.240914772Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 7 01:56:39.244297 containerd[1587]: time="2026-03-07T01:56:39.244208889Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:39.257415 containerd[1587]: time="2026-03-07T01:56:39.257342242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:39.259744 containerd[1587]: time="2026-03-07T01:56:39.259242502Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 18.905717885s" Mar 7 01:56:39.259744 containerd[1587]: time="2026-03-07T01:56:39.259381990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:56:39.289177 containerd[1587]: time="2026-03-07T01:56:39.286538734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:56:40.737501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241081608.mount: Deactivated successfully. Mar 7 01:56:41.718075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:56:41.772765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:56:42.657087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:56:42.733018 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:56:43.064476 kubelet[2144]: E0307 01:56:43.064320 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:56:43.072896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:56:43.078425 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:56:48.125319 containerd[1587]: time="2026-03-07T01:56:48.122785032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:48.127545 containerd[1587]: time="2026-03-07T01:56:48.126679951Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 7 01:56:48.130088 containerd[1587]: time="2026-03-07T01:56:48.130035087Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:48.138731 containerd[1587]: time="2026-03-07T01:56:48.136292232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:48.138731 containerd[1587]: time="2026-03-07T01:56:48.138076944Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 8.849392957s" Mar 7 01:56:48.138731 containerd[1587]: time="2026-03-07T01:56:48.138119151Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:56:48.141616 containerd[1587]: time="2026-03-07T01:56:48.141557139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:56:49.076650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417455744.mount: Deactivated successfully. 
Mar 7 01:56:49.128231 containerd[1587]: time="2026-03-07T01:56:49.127374585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:49.130792 containerd[1587]: time="2026-03-07T01:56:49.130516328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 7 01:56:49.134605 containerd[1587]: time="2026-03-07T01:56:49.133017158Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:49.138986 containerd[1587]: time="2026-03-07T01:56:49.138847242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:49.143611 containerd[1587]: time="2026-03-07T01:56:49.142602482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.000985242s" Mar 7 01:56:49.143611 containerd[1587]: time="2026-03-07T01:56:49.142675458Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:56:49.152522 containerd[1587]: time="2026-03-07T01:56:49.151825438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:56:50.232837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712519560.mount: Deactivated successfully. Mar 7 01:56:53.216930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 7 01:56:53.242464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:56:53.775002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:56:53.787336 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:56:53.979879 kubelet[2259]: E0307 01:56:53.979622 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:56:53.992062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:56:53.992494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:56:56.378199 containerd[1587]: time="2026-03-07T01:56:56.377300319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:56.385641 containerd[1587]: time="2026-03-07T01:56:56.385573050Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 7 01:56:56.392812 containerd[1587]: time="2026-03-07T01:56:56.392733611Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:56.418903 containerd[1587]: time="2026-03-07T01:56:56.416555156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:56.423996 containerd[1587]: time="2026-03-07T01:56:56.421830066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" 
with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 7.269930791s" Mar 7 01:56:56.423996 containerd[1587]: time="2026-03-07T01:56:56.423665400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:57:04.216992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 01:57:04.250579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:57:05.182881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:57:05.204737 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:57:05.512684 kubelet[2321]: E0307 01:57:05.512608 2321 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:57:05.524819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:57:05.525790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:57:07.081894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:57:07.100243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:57:07.183961 systemd[1]: Reloading requested from client PID 2339 ('systemctl') (unit session-7.scope)... Mar 7 01:57:07.183988 systemd[1]: Reloading... Mar 7 01:57:07.381463 zram_generator::config[2375]: No configuration found. 
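The kubelet.service restart loop above (restart counters 6 and 7) fails for the same reason each time: the kubelet is started with `--config /var/lib/kubelet/config.yaml`, but that file is only written by `kubeadm init`/`kubeadm join`, which has not completed yet on this node. A minimal sketch of the failing check, run against a scratch directory rather than the real `/var/lib/kubelet`:

```python
# Hedged sketch of the run.go:72 failure above: the kubelet exits with
# status 1 when its --config file cannot be read. A temp directory stands
# in for /var/lib/kubelet so this is safe to run anywhere.
import os
import tempfile

def load_kubelet_config(path):
    """Mimic the kubelet's config-file read; raise if the file is absent."""
    if not os.path.exists(path):
        raise FileNotFoundError(f"open {path}: no such file or directory")
    with open(path) as f:
        return f.read()

root = tempfile.mkdtemp()
try:
    load_kubelet_config(os.path.join(root, "config.yaml"))
except FileNotFoundError as e:
    print(e)  # mirrors the "no such file or directory" error in the log
```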
Mar 7 01:57:07.676320 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:57:07.814380 systemd[1]: Reloading finished in 627 ms. Mar 7 01:57:07.937913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:57:07.953069 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:57:07.956415 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:57:07.957315 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:57:07.957772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:57:07.965060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:57:08.348838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:57:08.366651 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:57:08.640535 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:57:08.640535 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:57:08.641306 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
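The deprecation warnings above say the flags should move into the file the unit passes via `--config`. A hedged sketch of what that KubeletConfiguration could look like; the socket path is an assumption (containerd's default), and only fields known to exist in the v1beta1 schema are shown:

```yaml
# Hedged sketch, not this node's actual /var/lib/kubelet/config.yaml.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# matches the "Adding static pod path" entry later in this log
staticPodPath: /etc/kubernetes/manifests
```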
Mar 7 01:57:08.641462 kubelet[2442]: I0307 01:57:08.641284 2442 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:57:09.044488 kubelet[2442]: I0307 01:57:09.043016 2442 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:57:09.044488 kubelet[2442]: I0307 01:57:09.043321 2442 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:57:09.045868 kubelet[2442]: I0307 01:57:09.045211 2442 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:57:09.125892 kubelet[2442]: I0307 01:57:09.122976 2442 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:57:09.133429 kubelet[2442]: E0307 01:57:09.133288 2442 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:57:09.135113 kubelet[2442]: E0307 01:57:09.134999 2442 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:57:09.135113 kubelet[2442]: I0307 01:57:09.135066 2442 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:57:09.146641 kubelet[2442]: I0307 01:57:09.146586 2442 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:57:09.147769 kubelet[2442]: I0307 01:57:09.147603 2442 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:57:09.148013 kubelet[2442]: I0307 01:57:09.147627 2442 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:57:09.148013 kubelet[2442]: I0307 01:57:09.148005 2442 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:57:09.148302 
kubelet[2442]: I0307 01:57:09.148022 2442 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:57:09.148302 kubelet[2442]: I0307 01:57:09.148246 2442 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:57:09.160373 kubelet[2442]: I0307 01:57:09.159395 2442 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:57:09.160373 kubelet[2442]: I0307 01:57:09.159843 2442 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:57:09.161781 kubelet[2442]: I0307 01:57:09.161301 2442 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:57:09.161781 kubelet[2442]: I0307 01:57:09.161369 2442 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:57:09.171464 kubelet[2442]: I0307 01:57:09.171240 2442 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:57:09.172243 kubelet[2442]: I0307 01:57:09.171915 2442 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:57:09.175087 kubelet[2442]: E0307 01:57:09.174801 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:57:09.175588 kubelet[2442]: E0307 01:57:09.175396 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:57:09.177921 kubelet[2442]: W0307 01:57:09.177044 
2442 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:57:09.192119 kubelet[2442]: I0307 01:57:09.190113 2442 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:57:09.192119 kubelet[2442]: I0307 01:57:09.190899 2442 server.go:1289] "Started kubelet" Mar 7 01:57:09.198341 kubelet[2442]: I0307 01:57:09.198032 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:57:09.199334 kubelet[2442]: E0307 01:57:09.197212 2442 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c6f6f7551a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:57:09.19084074 +0000 UTC m=+0.809070090,LastTimestamp:2026-03-07 01:57:09.19084074 +0000 UTC m=+0.809070090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:57:09.199491 kubelet[2442]: I0307 01:57:09.199357 2442 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:57:09.200440 kubelet[2442]: I0307 01:57:09.200413 2442 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:57:09.201811 kubelet[2442]: I0307 01:57:09.200628 2442 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:57:09.201811 kubelet[2442]: I0307 01:57:09.200745 2442 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:57:09.201811 kubelet[2442]: E0307 01:57:09.201301 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:57:09.201811 kubelet[2442]: E0307 01:57:09.201676 2442 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:57:09.201811 kubelet[2442]: E0307 01:57:09.201768 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Mar 7 01:57:09.202116 kubelet[2442]: I0307 01:57:09.202068 2442 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:57:09.203226 kubelet[2442]: I0307 01:57:09.203097 2442 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:57:09.203882 kubelet[2442]: I0307 01:57:09.203601 2442 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:57:09.203882 kubelet[2442]: I0307 01:57:09.203809 2442 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:57:09.205500 kubelet[2442]: I0307 01:57:09.203610 2442 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:57:09.206193 kubelet[2442]: I0307 01:57:09.205503 2442 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:57:09.207197 kubelet[2442]: E0307 01:57:09.206880 2442 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:57:09.210635 kubelet[2442]: I0307 01:57:09.210493 2442 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:57:09.268977 kubelet[2442]: I0307 01:57:09.268615 2442 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:57:09.268977 kubelet[2442]: I0307 01:57:09.268653 2442 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:57:09.268977 kubelet[2442]: I0307 01:57:09.268673 2442 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:57:09.273042 kubelet[2442]: I0307 01:57:09.272955 2442 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:57:09.283102 kubelet[2442]: I0307 01:57:09.281909 2442 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:57:09.286842 kubelet[2442]: I0307 01:57:09.284240 2442 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:57:09.286842 kubelet[2442]: I0307 01:57:09.284303 2442 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
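The `Creating Container Manager` entry above logs the node's HardEvictionThresholds: `memory.available` < 100Mi (an absolute quantity) and `nodefs.available` < 10%, `nodefs.inodesFree` < 5%, `imagefs.available` < 15% (percentages of capacity). A minimal sketch of how such a threshold fires, with made-up capacity numbers for illustration:

```python
# Hedged sketch: evaluate an eviction signal against a hard threshold as
# logged in the nodeConfig above. Capacities below are invented examples.
def under_threshold(available, capacity, quantity, percentage):
    """True when the eviction signal fires (available < resolved limit)."""
    limit = quantity if quantity is not None else percentage * capacity
    return available < limit

MiB = 1024 * 1024
# memory.available < 100Mi: 50 MiB free on an 8 GiB node -> signal fires
print(under_threshold(50 * MiB, 8192 * MiB, 100 * MiB, 0))  # True
# nodefs.available < 10%: 20 GiB free of 100 GiB -> signal does not fire
print(under_threshold(20, 100, None, 0.10))                 # False
```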
Mar 7 01:57:09.286842 kubelet[2442]: I0307 01:57:09.284318 2442 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:57:09.286842 kubelet[2442]: E0307 01:57:09.284383 2442 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:57:09.293538 kubelet[2442]: E0307 01:57:09.290597 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:57:09.302238 kubelet[2442]: E0307 01:57:09.301970 2442 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:57:09.361749 kubelet[2442]: I0307 01:57:09.361074 2442 policy_none.go:49] "None policy: Start" Mar 7 01:57:09.361749 kubelet[2442]: I0307 01:57:09.361412 2442 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:57:09.362586 kubelet[2442]: I0307 01:57:09.362113 2442 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:57:09.386893 kubelet[2442]: E0307 01:57:09.386314 2442 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:57:09.394048 kubelet[2442]: E0307 01:57:09.392759 2442 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:57:09.394048 kubelet[2442]: I0307 01:57:09.393396 2442 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:57:09.394048 kubelet[2442]: I0307 01:57:09.393416 2442 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:57:09.396309 kubelet[2442]: I0307 01:57:09.395966 2442 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:57:09.397825 kubelet[2442]: E0307 01:57:09.397530 2442 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:57:09.397825 kubelet[2442]: E0307 01:57:09.397583 2442 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:57:09.402667 kubelet[2442]: E0307 01:57:09.402570 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Mar 7 01:57:09.497508 kubelet[2442]: I0307 01:57:09.497401 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:57:09.498649 kubelet[2442]: E0307 01:57:09.497886 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 7 01:57:09.608344 kubelet[2442]: I0307 01:57:09.607854 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59b29ae473de813507ca6f7ab8daefa2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"59b29ae473de813507ca6f7ab8daefa2\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:57:09.608344 kubelet[2442]: I0307 01:57:09.607949 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59b29ae473de813507ca6f7ab8daefa2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"59b29ae473de813507ca6f7ab8daefa2\") " pod="kube-system/kube-apiserver-localhost" Mar 7 
01:57:09.608344 kubelet[2442]: I0307 01:57:09.607979 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59b29ae473de813507ca6f7ab8daefa2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"59b29ae473de813507ca6f7ab8daefa2\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:57:09.615077 kubelet[2442]: E0307 01:57:09.614557 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:57:09.632176 kubelet[2442]: E0307 01:57:09.632047 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:57:09.647292 kubelet[2442]: E0307 01:57:09.646886 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:57:09.707103 kubelet[2442]: I0307 01:57:09.705924 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:57:09.707103 kubelet[2442]: E0307 01:57:09.706374 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 7 01:57:09.720611 kubelet[2442]: I0307 01:57:09.718375 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:09.720611 kubelet[2442]: I0307 01:57:09.719211 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:09.722339 kubelet[2442]: I0307 01:57:09.721792 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:09.723677 kubelet[2442]: I0307 01:57:09.723398 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:09.723677 kubelet[2442]: I0307 01:57:09.723460 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:09.723677 kubelet[2442]: I0307 01:57:09.723487 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:57:09.806895 kubelet[2442]: E0307 01:57:09.806789 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Mar 7 01:57:09.918893 kubelet[2442]: E0307 01:57:09.917660 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:09.920723 containerd[1587]: time="2026-03-07T01:57:09.919179452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:59b29ae473de813507ca6f7ab8daefa2,Namespace:kube-system,Attempt:0,}" Mar 7 01:57:09.932993 kubelet[2442]: E0307 01:57:09.932702 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:09.934328 containerd[1587]: time="2026-03-07T01:57:09.934119463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 7 01:57:09.948971 kubelet[2442]: E0307 01:57:09.947597 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:09.949696 containerd[1587]: time="2026-03-07T01:57:09.949636961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 7 01:57:10.033897 kubelet[2442]: E0307 01:57:10.033704 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" 
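The lease-controller retries above back off by doubling: the logged intervals are 200ms, then 400ms, then 800ms, then 1.6s. A minimal sketch of that doubling schedule; the cap value is an assumption for illustration, not taken from the log:

```python
# Hedged sketch: the "Failed to ensure lease exists, will retry" entries
# above double their retry interval each attempt (200ms -> 400ms -> 800ms
# -> 1.6s). The cap below is illustrative only.
def backoff_intervals(base_ms=200, cap_ms=7000, steps=4):
    """Yield doubling retry intervals in milliseconds, clamped at cap_ms."""
    interval = base_ms
    for _ in range(steps):
        yield interval
        interval = min(interval * 2, cap_ms)

print(list(backoff_intervals()))  # [200, 400, 800, 1600]
```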
Mar 7 01:57:10.111718 kubelet[2442]: I0307 01:57:10.111607 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:57:10.113378 kubelet[2442]: E0307 01:57:10.113331 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 7 01:57:10.130594 kubelet[2442]: E0307 01:57:10.130540 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:57:10.173912 kubelet[2442]: E0307 01:57:10.173479 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:57:10.185039 kubelet[2442]: E0307 01:57:10.184707 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:57:10.608231 kubelet[2442]: E0307 01:57:10.607514 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Mar 7 01:57:10.643841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982769505.mount: 
Deactivated successfully.
Mar 7 01:57:10.678845 containerd[1587]: time="2026-03-07T01:57:10.676562625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:57:10.696224 containerd[1587]: time="2026-03-07T01:57:10.695990056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 7 01:57:10.698784 containerd[1587]: time="2026-03-07T01:57:10.698565569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:57:10.706277 containerd[1587]: time="2026-03-07T01:57:10.704812885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:57:10.711335 containerd[1587]: time="2026-03-07T01:57:10.711224956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:57:10.719020 containerd[1587]: time="2026-03-07T01:57:10.713463965Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:57:10.723108 containerd[1587]: time="2026-03-07T01:57:10.722288839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:57:10.736694 containerd[1587]: time="2026-03-07T01:57:10.734320420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:57:10.736694 containerd[1587]: time="2026-03-07T01:57:10.735980271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 816.679488ms"
Mar 7 01:57:10.753991 containerd[1587]: time="2026-03-07T01:57:10.753548016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 803.806087ms"
Mar 7 01:57:10.755061 containerd[1587]: time="2026-03-07T01:57:10.754951316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 820.669613ms"
Mar 7 01:57:10.916822 kubelet[2442]: I0307 01:57:10.916650 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:57:10.920437 kubelet[2442]: E0307 01:57:10.920360 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Mar 7 01:57:11.337209 kubelet[2442]: E0307 01:57:11.335227 2442 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:57:11.974768 kubelet[2442]: E0307 01:57:11.974597 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:57:12.024117 containerd[1587]: time="2026-03-07T01:57:12.019932885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:57:12.024117 containerd[1587]: time="2026-03-07T01:57:12.020294488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:57:12.024117 containerd[1587]: time="2026-03-07T01:57:12.020314768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:12.024117 containerd[1587]: time="2026-03-07T01:57:12.021456400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:12.099908 containerd[1587]: time="2026-03-07T01:57:12.097056096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:57:12.099908 containerd[1587]: time="2026-03-07T01:57:12.097207354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:57:12.099908 containerd[1587]: time="2026-03-07T01:57:12.097237383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:12.099908 containerd[1587]: time="2026-03-07T01:57:12.097420534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:12.152370 containerd[1587]: time="2026-03-07T01:57:12.151947701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:57:12.152370 containerd[1587]: time="2026-03-07T01:57:12.152051506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:57:12.152370 containerd[1587]: time="2026-03-07T01:57:12.152072237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:12.152370 containerd[1587]: time="2026-03-07T01:57:12.152228535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:12.218254 kubelet[2442]: E0307 01:57:12.211432 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="3.2s"
Mar 7 01:57:12.546313 kubelet[2442]: I0307 01:57:12.544590 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:57:12.546313 kubelet[2442]: E0307 01:57:12.545326 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Mar 7 01:57:12.862417 containerd[1587]: time="2026-03-07T01:57:12.862249250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bac86440d08a516637ee0f0cb3e51cce64d7df4de58ca349b490db0bbac1dc0\""
Mar 7 01:57:12.869607 kubelet[2442]: E0307 01:57:12.867541 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:12.913869 containerd[1587]: time="2026-03-07T01:57:12.910882122Z" level=info msg="CreateContainer within sandbox \"6bac86440d08a516637ee0f0cb3e51cce64d7df4de58ca349b490db0bbac1dc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:57:12.922071 kubelet[2442]: E0307 01:57:12.921780 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:57:13.014820 containerd[1587]: time="2026-03-07T01:57:13.014677452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7953893640e20e534474b585aef5bae83062ce76cf4713e209f48f3e9490bae\""
Mar 7 01:57:13.020280 kubelet[2442]: E0307 01:57:13.020208 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:13.054814 containerd[1587]: time="2026-03-07T01:57:13.054523525Z" level=info msg="CreateContainer within sandbox \"a7953893640e20e534474b585aef5bae83062ce76cf4713e209f48f3e9490bae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:57:13.074455 systemd[1]: run-containerd-runc-k8s.io-df4f7f37be86baa49bf2757b036834899035acfa4f58fd2fe65647cc923190c3-runc.SXs93V.mount: Deactivated successfully.
Mar 7 01:57:13.091953 containerd[1587]: time="2026-03-07T01:57:13.091545894Z" level=info msg="CreateContainer within sandbox \"6bac86440d08a516637ee0f0cb3e51cce64d7df4de58ca349b490db0bbac1dc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97f15cd794cbbfdcc998139dbc166d906e2084e28a93c0ff0809d008146e6e38\""
Mar 7 01:57:13.098511 containerd[1587]: time="2026-03-07T01:57:13.097949906Z" level=info msg="StartContainer for \"97f15cd794cbbfdcc998139dbc166d906e2084e28a93c0ff0809d008146e6e38\""
Mar 7 01:57:13.104879 kubelet[2442]: E0307 01:57:13.104526 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:57:13.133279 containerd[1587]: time="2026-03-07T01:57:13.130518556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:59b29ae473de813507ca6f7ab8daefa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4f7f37be86baa49bf2757b036834899035acfa4f58fd2fe65647cc923190c3\""
Mar 7 01:57:13.133404 kubelet[2442]: E0307 01:57:13.131687 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:13.160448 containerd[1587]: time="2026-03-07T01:57:13.157935039Z" level=info msg="CreateContainer within sandbox \"df4f7f37be86baa49bf2757b036834899035acfa4f58fd2fe65647cc923190c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:57:13.300975 kubelet[2442]: E0307 01:57:13.300788 2442 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:57:13.317233 containerd[1587]: time="2026-03-07T01:57:13.316618220Z" level=info msg="CreateContainer within sandbox \"a7953893640e20e534474b585aef5bae83062ce76cf4713e209f48f3e9490bae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e6a5f0a4da3013caaf28cbc0cbec389cc24c15906d95ada710533cc246cb1b0\""
Mar 7 01:57:13.319602 containerd[1587]: time="2026-03-07T01:57:13.318997312Z" level=info msg="StartContainer for \"9e6a5f0a4da3013caaf28cbc0cbec389cc24c15906d95ada710533cc246cb1b0\""
Mar 7 01:57:13.475739 containerd[1587]: time="2026-03-07T01:57:13.475685076Z" level=info msg="CreateContainer within sandbox \"df4f7f37be86baa49bf2757b036834899035acfa4f58fd2fe65647cc923190c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95467185e75c8f0789138150a93812c7590f42a69481ceeec120576527d33d2e\""
Mar 7 01:57:13.495773 containerd[1587]: time="2026-03-07T01:57:13.492677245Z" level=info msg="StartContainer for \"95467185e75c8f0789138150a93812c7590f42a69481ceeec120576527d33d2e\""
Mar 7 01:57:14.205590 containerd[1587]: time="2026-03-07T01:57:14.205489111Z" level=info msg="StartContainer for \"97f15cd794cbbfdcc998139dbc166d906e2084e28a93c0ff0809d008146e6e38\" returns successfully"
Mar 7 01:57:14.410311 containerd[1587]: time="2026-03-07T01:57:14.408611324Z" level=info msg="StartContainer for \"95467185e75c8f0789138150a93812c7590f42a69481ceeec120576527d33d2e\" returns successfully"
Mar 7 01:57:14.976086 containerd[1587]: time="2026-03-07T01:57:14.975952019Z" level=info msg="StartContainer for \"9e6a5f0a4da3013caaf28cbc0cbec389cc24c15906d95ada710533cc246cb1b0\" returns successfully"
Mar 7 01:57:14.992922 kubelet[2442]: E0307 01:57:14.991835 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:14.999663 kubelet[2442]: E0307 01:57:14.998878 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:15.020613 kubelet[2442]: E0307 01:57:15.014915 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:15.020613 kubelet[2442]: E0307 01:57:15.015197 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:15.025505 kubelet[2442]: E0307 01:57:15.025472 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:15.027973 kubelet[2442]: E0307 01:57:15.027895 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:15.759623 kubelet[2442]: I0307 01:57:15.758219 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:57:16.065605 kubelet[2442]: E0307 01:57:16.064812 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:16.065605 kubelet[2442]: E0307 01:57:16.065002 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:16.079265 kubelet[2442]: E0307 01:57:16.078789 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:16.079265 kubelet[2442]: E0307 01:57:16.079027 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:16.105292 kubelet[2442]: E0307 01:57:16.104507 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:16.105292 kubelet[2442]: E0307 01:57:16.105027 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:17.066184 kubelet[2442]: E0307 01:57:17.065596 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:17.066184 kubelet[2442]: E0307 01:57:17.065833 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:17.078064 kubelet[2442]: E0307 01:57:17.077928 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:17.078267 kubelet[2442]: E0307 01:57:17.078233 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:18.080496 kubelet[2442]: E0307 01:57:18.073309 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:18.080496 kubelet[2442]: E0307 01:57:18.073543 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:19.397949 kubelet[2442]: E0307 01:57:19.397843 2442 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:57:21.357693 kubelet[2442]: E0307 01:57:21.357613 2442 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 7 01:57:21.447625 kubelet[2442]: E0307 01:57:21.447466 2442 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:57:21.447792 kubelet[2442]: E0307 01:57:21.447740 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:21.539867 kubelet[2442]: I0307 01:57:21.539599 2442 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 7 01:57:21.539867 kubelet[2442]: E0307 01:57:21.539644 2442 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 7 01:57:21.546773 kubelet[2442]: E0307 01:57:21.544460 2442 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6c6f6f7551a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:57:09.19084074 +0000 UTC m=+0.809070090,LastTimestamp:2026-03-07 01:57:09.19084074 +0000 UTC m=+0.809070090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:57:21.603523 kubelet[2442]: I0307 01:57:21.602766 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:57:21.647307 kubelet[2442]: E0307 01:57:21.646249 2442 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:57:21.647307 kubelet[2442]: I0307 01:57:21.646292 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 7 01:57:21.649618 kubelet[2442]: E0307 01:57:21.649594 2442 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 7 01:57:21.649812 kubelet[2442]: I0307 01:57:21.649710 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:57:21.652599 kubelet[2442]: E0307 01:57:21.652558 2442 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:57:22.206430 kubelet[2442]: I0307 01:57:22.202258 2442 apiserver.go:52] "Watching apiserver"
Mar 7 01:57:22.320036 kubelet[2442]: I0307 01:57:22.309920 2442 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 01:57:27.014108 kubelet[2442]: I0307 01:57:27.014006 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:57:27.347098 kubelet[2442]: E0307 01:57:27.343781 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:28.340540 kubelet[2442]: E0307 01:57:28.339693 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:31.133203 systemd[1]: Reloading requested from client PID 2731 ('systemctl') (unit session-7.scope)...
Mar 7 01:57:31.133329 systemd[1]: Reloading...
Mar 7 01:57:31.510651 kubelet[2442]: I0307 01:57:31.510582 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:57:31.646791 kubelet[2442]: I0307 01:57:31.638651 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.638560892 podStartE2EDuration="4.638560892s" podCreationTimestamp="2026-03-07 01:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:57:29.82554027 +0000 UTC m=+21.443769619" watchObservedRunningTime="2026-03-07 01:57:31.638560892 +0000 UTC m=+23.256790211"
Mar 7 01:57:31.717260 kubelet[2442]: E0307 01:57:31.717203 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:32.042009 zram_generator::config[2776]: No configuration found.
Mar 7 01:57:32.430457 kubelet[2442]: E0307 01:57:32.398091 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:32.571221 kubelet[2442]: E0307 01:57:32.568540 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:32.656252 kubelet[2442]: I0307 01:57:32.653000 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.652904284 podStartE2EDuration="1.652904284s" podCreationTimestamp="2026-03-07 01:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:57:32.652104763 +0000 UTC m=+24.270334122" watchObservedRunningTime="2026-03-07 01:57:32.652904284 +0000 UTC m=+24.271133603"
Mar 7 01:57:33.405427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:57:33.874043 systemd[1]: Reloading finished in 2739 ms.
Mar 7 01:57:34.027296 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:57:34.082482 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:57:34.083082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:57:34.144041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:57:35.043530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:57:35.203716 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:57:35.723944 kubelet[2825]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:57:35.723944 kubelet[2825]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:57:35.723944 kubelet[2825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:57:35.723944 kubelet[2825]: I0307 01:57:35.718969 2825 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:57:35.732869 kubelet[2825]: I0307 01:57:35.732772 2825 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:57:35.732869 kubelet[2825]: I0307 01:57:35.732829 2825 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:57:35.736232 kubelet[2825]: I0307 01:57:35.733335 2825 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:57:35.741186 kubelet[2825]: I0307 01:57:35.739217 2825 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:57:35.780543 kubelet[2825]: I0307 01:57:35.775603 2825 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:57:35.810365 kubelet[2825]: E0307 01:57:35.801258 2825 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:57:35.826186 kubelet[2825]: I0307 01:57:35.821112 2825 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:57:35.855615 kubelet[2825]: I0307 01:57:35.855433 2825 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:57:35.857780 kubelet[2825]: I0307 01:57:35.856714 2825 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:57:35.859985 kubelet[2825]: I0307 01:57:35.857772 2825 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 01:57:35.859985 kubelet[2825]: I0307 01:57:35.858040 2825 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:57:35.859985 kubelet[2825]: I0307 01:57:35.858058 2825 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:57:35.859985 kubelet[2825]: I0307 01:57:35.858190 2825 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:57:35.859985 kubelet[2825]: I0307 01:57:35.858611 2825 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:57:35.860373 kubelet[2825]: I0307 01:57:35.858734 2825 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:57:35.860373 kubelet[2825]: I0307 01:57:35.858783 2825 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:57:35.860373 kubelet[2825]: I0307 01:57:35.858804 2825 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:57:35.866940 kubelet[2825]: I0307 01:57:35.862976 2825 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:57:35.866940 kubelet[2825]: I0307 01:57:35.863808 2825 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:57:35.906692 kubelet[2825]: I0307 01:57:35.894304 2825 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:57:35.906692 kubelet[2825]: I0307 01:57:35.894363 2825 server.go:1289] "Started kubelet"
Mar 7 01:57:35.909494 kubelet[2825]: I0307 01:57:35.909455 2825 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:57:35.912162 kubelet[2825]: I0307 01:57:35.911956 2825 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:57:35.921498 kubelet[2825]: I0307 01:57:35.920838 2825 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:57:35.921498 kubelet[2825]: I0307 01:57:35.921007 2825 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:57:35.923529 kubelet[2825]: E0307 01:57:35.923309 2825 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:57:35.930192 kubelet[2825]: I0307 01:57:35.927924 2825 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:57:35.938051 kubelet[2825]: I0307 01:57:35.937543 2825 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:57:35.947958 kubelet[2825]: I0307 01:57:35.935578 2825 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:57:35.955274 kubelet[2825]: I0307 01:57:35.950832 2825 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:57:35.975095 kubelet[2825]: I0307 01:57:35.970318 2825 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:57:35.999592 kubelet[2825]: I0307 01:57:35.997300 2825 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:57:35.999592 kubelet[2825]: I0307 01:57:35.998687 2825 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:57:36.029086 kubelet[2825]: E0307 01:57:36.025455 2825 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:57:36.029086 kubelet[2825]: E0307 01:57:36.027421 2825 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:57:36.035107 kubelet[2825]: I0307 01:57:36.034328 2825 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:57:36.185384 kubelet[2825]: I0307 01:57:36.184048 2825 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:57:36.235316 kubelet[2825]: I0307 01:57:36.208209 2825 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:57:36.235316 kubelet[2825]: I0307 01:57:36.208346 2825 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:57:36.235316 kubelet[2825]: I0307 01:57:36.209115 2825 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:57:36.235316 kubelet[2825]: I0307 01:57:36.228631 2825 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:57:36.235316 kubelet[2825]: E0307 01:57:36.228698 2825 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 7 01:57:36.335994 kubelet[2825]: E0307 01:57:36.329329 2825 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 7 01:57:36.414708 sudo[2864]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 7 01:57:36.415443 sudo[2864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 7 01:57:36.538429 kubelet[2825]: E0307 01:57:36.533826 2825 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 7 01:57:36.558827 kubelet[2825]: I0307 01:57:36.558797 2825 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:57:36.560466 kubelet[2825]: I0307 01:57:36.560447 2825 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:57:36.560624 kubelet[2825]: I0307 01:57:36.560608 2825 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:57:36.560946 kubelet[2825]: I0307 01:57:36.560927 2825 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:57:36.561048 kubelet[2825]: I0307 01:57:36.561016 2825 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:57:36.561229 kubelet[2825]: I0307 01:57:36.561214 2825 policy_none.go:49] "None policy: Start"
Mar 7 01:57:36.561316 kubelet[2825]: I0307 01:57:36.561304 2825 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:57:36.561403 kubelet[2825]: I0307 01:57:36.561390 2825 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:57:36.561688 kubelet[2825]: I0307 01:57:36.561593 2825 state_mem.go:75] "Updated machine memory state"
Mar 7 01:57:36.568022 kubelet[2825]: E0307 01:57:36.567237 2825 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:57:36.568022 kubelet[2825]: I0307 01:57:36.567470 2825 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:57:36.568022 kubelet[2825]: I0307 01:57:36.567484 2825 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:57:36.568022 kubelet[2825]: I0307 01:57:36.567871 2825 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:57:36.574700 kubelet[2825]: I0307 01:57:36.574670 2825 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:57:36.591584 kubelet[2825]: E0307 01:57:36.583210 2825 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:57:36.591709 containerd[1587]: time="2026-03-07T01:57:36.591101668Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:57:36.596790 kubelet[2825]: I0307 01:57:36.596763 2825 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:57:36.737727 kubelet[2825]: I0307 01:57:36.737684 2825 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:57:36.846358 kubelet[2825]: I0307 01:57:36.845280 2825 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 7 01:57:36.846358 kubelet[2825]: I0307 01:57:36.845374 2825 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 7 01:57:36.881521 kubelet[2825]: I0307 01:57:36.881470 2825 apiserver.go:52] "Watching apiserver"
Mar 7 01:57:36.968190 kubelet[2825]: I0307 01:57:36.959551 2825 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:57:36.968190 kubelet[2825]: I0307 01:57:36.960805 2825 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 7 01:57:37.019001 kubelet[2825]: E0307 01:57:37.017229 2825 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:57:37.029188 kubelet[2825]: I0307 01:57:37.027225 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 7 01:57:37.129021 kubelet[2825]: I0307 01:57:37.128287 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59b29ae473de813507ca6f7ab8daefa2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"59b29ae473de813507ca6f7ab8daefa2\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:57:37.129021 kubelet[2825]: I0307 01:57:37.128356 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:57:37.129021 kubelet[2825]: I0307 01:57:37.128388 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e073c4c-dd1b-46eb-96ad-81a25bb7cebf-kube-proxy\") pod \"kube-proxy-ppg4p\" (UID: \"6e073c4c-dd1b-46eb-96ad-81a25bb7cebf\") " pod="kube-system/kube-proxy-ppg4p"
Mar 7 01:57:37.129021 kubelet[2825]: I0307 01:57:37.128417 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e073c4c-dd1b-46eb-96ad-81a25bb7cebf-xtables-lock\") pod \"kube-proxy-ppg4p\" (UID: \"6e073c4c-dd1b-46eb-96ad-81a25bb7cebf\") " pod="kube-system/kube-proxy-ppg4p"
Mar 7 01:57:37.129021 kubelet[2825]: I0307 01:57:37.128451 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e073c4c-dd1b-46eb-96ad-81a25bb7cebf-lib-modules\") pod \"kube-proxy-ppg4p\" (UID: \"6e073c4c-dd1b-46eb-96ad-81a25bb7cebf\") " pod="kube-system/kube-proxy-ppg4p"
Mar 7 01:57:37.129404 kubelet[2825]: I0307 01:57:37.128476 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59b29ae473de813507ca6f7ab8daefa2-ca-certs\") pod
\"kube-apiserver-localhost\" (UID: \"59b29ae473de813507ca6f7ab8daefa2\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:57:37.129404 kubelet[2825]: I0307 01:57:37.128503 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59b29ae473de813507ca6f7ab8daefa2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"59b29ae473de813507ca6f7ab8daefa2\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:57:37.129404 kubelet[2825]: I0307 01:57:37.128531 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:37.129404 kubelet[2825]: I0307 01:57:37.128568 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:37.129404 kubelet[2825]: I0307 01:57:37.128592 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:37.129649 kubelet[2825]: I0307 01:57:37.128615 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:57:37.129649 kubelet[2825]: I0307 01:57:37.128644 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5ktx\" (UniqueName: \"kubernetes.io/projected/6e073c4c-dd1b-46eb-96ad-81a25bb7cebf-kube-api-access-r5ktx\") pod \"kube-proxy-ppg4p\" (UID: \"6e073c4c-dd1b-46eb-96ad-81a25bb7cebf\") " pod="kube-system/kube-proxy-ppg4p" Mar 7 01:57:37.239384 kubelet[2825]: E0307 01:57:37.236316 2825 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Mar 7 01:57:37.239384 kubelet[2825]: E0307 01:57:37.236409 2825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e073c4c-dd1b-46eb-96ad-81a25bb7cebf-kube-proxy podName:6e073c4c-dd1b-46eb-96ad-81a25bb7cebf nodeName:}" failed. No retries permitted until 2026-03-07 01:57:37.736374822 +0000 UTC m=+2.484066940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6e073c4c-dd1b-46eb-96ad-81a25bb7cebf-kube-proxy") pod "kube-proxy-ppg4p" (UID: "6e073c4c-dd1b-46eb-96ad-81a25bb7cebf") : object "kube-system"/"kube-proxy" not registered Mar 7 01:57:37.273425 kubelet[2825]: E0307 01:57:37.272904 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:37.317098 kubelet[2825]: E0307 01:57:37.308060 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:37.320824 kubelet[2825]: E0307 01:57:37.320794 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:37.329488 kubelet[2825]: I0307 01:57:37.329458 2825 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:57:37.363833 kubelet[2825]: E0307 01:57:37.363798 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:37.622462 kubelet[2825]: I0307 01:57:37.616791 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.616772442 podStartE2EDuration="616.772442ms" podCreationTimestamp="2026-03-07 01:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:57:37.549371422 +0000 UTC m=+2.297063570" watchObservedRunningTime="2026-03-07 01:57:37.616772442 +0000 UTC m=+2.364464560" Mar 7 01:57:37.909712 kubelet[2825]: E0307 01:57:37.904817 2825 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:37.917293 containerd[1587]: time="2026-03-07T01:57:37.917208718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppg4p,Uid:6e073c4c-dd1b-46eb-96ad-81a25bb7cebf,Namespace:kube-system,Attempt:0,}" Mar 7 01:57:38.071794 containerd[1587]: time="2026-03-07T01:57:38.067765368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:38.071794 containerd[1587]: time="2026-03-07T01:57:38.068112434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:38.071794 containerd[1587]: time="2026-03-07T01:57:38.068230812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:38.071794 containerd[1587]: time="2026-03-07T01:57:38.068442809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:38.350297 kubelet[2825]: E0307 01:57:38.345941 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:38.357625 containerd[1587]: time="2026-03-07T01:57:38.354023302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppg4p,Uid:6e073c4c-dd1b-46eb-96ad-81a25bb7cebf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3049933ffb7b1fd337e9108b8fe4ea1909cd4eaa618247df7b508bcab3830a55\"" Mar 7 01:57:38.375307 kubelet[2825]: E0307 01:57:38.360845 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:38.403054 kubelet[2825]: E0307 01:57:38.402941 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:38.537195 containerd[1587]: time="2026-03-07T01:57:38.534410651Z" level=info msg="CreateContainer within sandbox \"3049933ffb7b1fd337e9108b8fe4ea1909cd4eaa618247df7b508bcab3830a55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:57:38.709830 containerd[1587]: time="2026-03-07T01:57:38.709656073Z" level=info msg="CreateContainer within sandbox \"3049933ffb7b1fd337e9108b8fe4ea1909cd4eaa618247df7b508bcab3830a55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3687784904c2819e7fee3c31e6e9c9c15d3b6f8b99c355bcc06556e4c7a12c69\"" Mar 7 01:57:38.721189 containerd[1587]: time="2026-03-07T01:57:38.715541289Z" level=info msg="StartContainer for \"3687784904c2819e7fee3c31e6e9c9c15d3b6f8b99c355bcc06556e4c7a12c69\"" Mar 7 01:57:39.155302 containerd[1587]: time="2026-03-07T01:57:39.149545350Z" level=info msg="StartContainer for 
\"3687784904c2819e7fee3c31e6e9c9c15d3b6f8b99c355bcc06556e4c7a12c69\" returns successfully" Mar 7 01:57:39.408986 kubelet[2825]: E0307 01:57:39.403347 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:39.413554 kubelet[2825]: E0307 01:57:39.412708 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:39.413554 kubelet[2825]: E0307 01:57:39.413356 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:39.987909 sudo[2864]: pam_unix(sudo:session): session closed for user root Mar 7 01:57:40.924620 kubelet[2825]: E0307 01:57:40.913977 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:41.068648 kubelet[2825]: I0307 01:57:41.064865 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ppg4p" podStartSLOduration=5.064842086 podStartE2EDuration="5.064842086s" podCreationTimestamp="2026-03-07 01:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:57:39.497414079 +0000 UTC m=+4.245106197" watchObservedRunningTime="2026-03-07 01:57:41.064842086 +0000 UTC m=+5.812534205" Mar 7 01:57:41.413519 kubelet[2825]: E0307 01:57:41.411984 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:43.681034 kubelet[2825]: E0307 01:57:43.666690 2825 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:43.687694 kubelet[2825]: I0307 01:57:43.682004 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-run\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.687694 kubelet[2825]: I0307 01:57:43.682045 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-bpf-maps\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.687694 kubelet[2825]: I0307 01:57:43.682067 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-clustermesh-secrets\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.687694 kubelet[2825]: I0307 01:57:43.682095 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-etc-cni-netd\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.687694 kubelet[2825]: I0307 01:57:43.682115 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hubble-tls\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 
01:57:43.687694 kubelet[2825]: I0307 01:57:43.682198 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b976r\" (UniqueName: \"kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-kube-api-access-b976r\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688109 kubelet[2825]: I0307 01:57:43.682223 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hostproc\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688109 kubelet[2825]: I0307 01:57:43.682256 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-cgroup\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688109 kubelet[2825]: I0307 01:57:43.682315 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cni-path\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688109 kubelet[2825]: I0307 01:57:43.682339 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-xtables-lock\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688109 kubelet[2825]: I0307 01:57:43.682367 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-lib-modules\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688109 kubelet[2825]: I0307 01:57:43.682389 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-config-path\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688493 kubelet[2825]: I0307 01:57:43.682411 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-net\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.688493 kubelet[2825]: I0307 01:57:43.682482 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-kernel\") pod \"cilium-kzkkn\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") " pod="kube-system/cilium-kzkkn" Mar 7 01:57:43.786560 kubelet[2825]: I0307 01:57:43.785534 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec146893-f95d-4382-9335-1194bbe341a8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fdmbb\" (UID: \"ec146893-f95d-4382-9335-1194bbe341a8\") " pod="kube-system/cilium-operator-6c4d7847fc-fdmbb" Mar 7 01:57:43.786560 kubelet[2825]: I0307 01:57:43.785676 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvrpz\" (UniqueName: 
\"kubernetes.io/projected/ec146893-f95d-4382-9335-1194bbe341a8-kube-api-access-dvrpz\") pod \"cilium-operator-6c4d7847fc-fdmbb\" (UID: \"ec146893-f95d-4382-9335-1194bbe341a8\") " pod="kube-system/cilium-operator-6c4d7847fc-fdmbb" Mar 7 01:57:44.002200 kubelet[2825]: E0307 01:57:44.000622 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:44.002406 containerd[1587]: time="2026-03-07T01:57:44.001707608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzkkn,Uid:99004e17-b6a7-4fb7-a2b0-86ac5bab0cad,Namespace:kube-system,Attempt:0,}" Mar 7 01:57:44.028375 kubelet[2825]: E0307 01:57:44.025800 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:44.028578 containerd[1587]: time="2026-03-07T01:57:44.027809631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fdmbb,Uid:ec146893-f95d-4382-9335-1194bbe341a8,Namespace:kube-system,Attempt:0,}" Mar 7 01:57:44.196696 containerd[1587]: time="2026-03-07T01:57:44.196546848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:44.197012 containerd[1587]: time="2026-03-07T01:57:44.196929210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:44.197342 containerd[1587]: time="2026-03-07T01:57:44.197048667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:44.198478 containerd[1587]: time="2026-03-07T01:57:44.198104357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:44.258650 containerd[1587]: time="2026-03-07T01:57:44.252494197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:44.258650 containerd[1587]: time="2026-03-07T01:57:44.252548982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:44.258650 containerd[1587]: time="2026-03-07T01:57:44.252567797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:44.258650 containerd[1587]: time="2026-03-07T01:57:44.252807055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:44.438552 containerd[1587]: time="2026-03-07T01:57:44.438500084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzkkn,Uid:99004e17-b6a7-4fb7-a2b0-86ac5bab0cad,Namespace:kube-system,Attempt:0,} returns sandbox id \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\"" Mar 7 01:57:44.452110 kubelet[2825]: E0307 01:57:44.450027 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:44.459694 kubelet[2825]: E0307 01:57:44.459655 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:44.463227 containerd[1587]: time="2026-03-07T01:57:44.462559936Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:57:44.687890 containerd[1587]: time="2026-03-07T01:57:44.687657778Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fdmbb,Uid:ec146893-f95d-4382-9335-1194bbe341a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe\"" Mar 7 01:57:44.689732 kubelet[2825]: E0307 01:57:44.689611 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:45.543677 kubelet[2825]: E0307 01:57:45.535620 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:46.729104 kubelet[2825]: E0307 01:57:46.729056 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:03.332896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015556445.mount: Deactivated successfully. 
Mar 7 01:58:19.362407 containerd[1587]: time="2026-03-07T01:58:19.357830085Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:19.385497 containerd[1587]: time="2026-03-07T01:58:19.385398465Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:58:19.421227 containerd[1587]: time="2026-03-07T01:58:19.419588980Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:19.448410 containerd[1587]: time="2026-03-07T01:58:19.448354120Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 34.985740974s" Mar 7 01:58:19.460468 containerd[1587]: time="2026-03-07T01:58:19.451284373Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:58:19.489227 containerd[1587]: time="2026-03-07T01:58:19.488874648Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:58:19.514448 containerd[1587]: time="2026-03-07T01:58:19.514398807Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:58:19.585609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551411571.mount: Deactivated successfully. Mar 7 01:58:19.643767 containerd[1587]: time="2026-03-07T01:58:19.642456375Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\"" Mar 7 01:58:19.655099 containerd[1587]: time="2026-03-07T01:58:19.654983297Z" level=info msg="StartContainer for \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\"" Mar 7 01:58:19.920422 containerd[1587]: time="2026-03-07T01:58:19.920099144Z" level=info msg="StartContainer for \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\" returns successfully" Mar 7 01:58:20.062716 kubelet[2825]: E0307 01:58:20.057975 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:20.277248 containerd[1587]: time="2026-03-07T01:58:20.276902819Z" level=info msg="shim disconnected" id=645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72 namespace=k8s.io Mar 7 01:58:20.277248 containerd[1587]: time="2026-03-07T01:58:20.276976168Z" level=warning msg="cleaning up after shim disconnected" id=645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72 namespace=k8s.io Mar 7 01:58:20.277248 containerd[1587]: time="2026-03-07T01:58:20.276992769Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:58:20.582282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72-rootfs.mount: Deactivated successfully. 
Mar 7 01:58:21.107598 kubelet[2825]: E0307 01:58:21.089040 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:21.221034 containerd[1587]: time="2026-03-07T01:58:21.220969448Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:58:21.420115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371059937.mount: Deactivated successfully. Mar 7 01:58:21.929463 containerd[1587]: time="2026-03-07T01:58:21.922914613Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\"" Mar 7 01:58:21.932208 containerd[1587]: time="2026-03-07T01:58:21.929983658Z" level=info msg="StartContainer for \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\"" Mar 7 01:58:22.510008 containerd[1587]: time="2026-03-07T01:58:22.509822383Z" level=info msg="StartContainer for \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\" returns successfully" Mar 7 01:58:22.569400 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:58:22.571340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:58:22.577365 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:58:22.631832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:58:22.770832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 01:58:23.009386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d-rootfs.mount: Deactivated successfully. Mar 7 01:58:23.134266 containerd[1587]: time="2026-03-07T01:58:23.133480769Z" level=info msg="shim disconnected" id=6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d namespace=k8s.io Mar 7 01:58:23.134266 containerd[1587]: time="2026-03-07T01:58:23.133553146Z" level=warning msg="cleaning up after shim disconnected" id=6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d namespace=k8s.io Mar 7 01:58:23.134266 containerd[1587]: time="2026-03-07T01:58:23.133569958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:58:23.188224 kubelet[2825]: E0307 01:58:23.187807 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:23.227243 containerd[1587]: time="2026-03-07T01:58:23.224914031Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:58:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:58:24.211598 kubelet[2825]: E0307 01:58:24.210591 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:24.349268 containerd[1587]: time="2026-03-07T01:58:24.348950000Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:58:24.515483 containerd[1587]: time="2026-03-07T01:58:24.515103036Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\"" Mar 7 01:58:24.524468 containerd[1587]: time="2026-03-07T01:58:24.519311815Z" level=info msg="StartContainer for \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\"" Mar 7 01:58:24.937611 containerd[1587]: time="2026-03-07T01:58:24.937247338Z" level=info msg="StartContainer for \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\" returns successfully" Mar 7 01:58:25.208804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b-rootfs.mount: Deactivated successfully. Mar 7 01:58:25.273430 kubelet[2825]: E0307 01:58:25.269564 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:25.364035 containerd[1587]: time="2026-03-07T01:58:25.361783122Z" level=info msg="shim disconnected" id=d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b namespace=k8s.io Mar 7 01:58:25.364035 containerd[1587]: time="2026-03-07T01:58:25.361844369Z" level=warning msg="cleaning up after shim disconnected" id=d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b namespace=k8s.io Mar 7 01:58:25.364035 containerd[1587]: time="2026-03-07T01:58:25.361855429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:58:26.282583 kubelet[2825]: E0307 01:58:26.280456 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:26.329352 containerd[1587]: time="2026-03-07T01:58:26.328451245Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for container 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:58:26.516909 containerd[1587]: time="2026-03-07T01:58:26.512027027Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\"" Mar 7 01:58:26.523880 containerd[1587]: time="2026-03-07T01:58:26.521914171Z" level=info msg="StartContainer for \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\"" Mar 7 01:58:26.838592 containerd[1587]: time="2026-03-07T01:58:26.838553416Z" level=info msg="StartContainer for \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\" returns successfully" Mar 7 01:58:27.022946 containerd[1587]: time="2026-03-07T01:58:27.019258026Z" level=info msg="shim disconnected" id=12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0 namespace=k8s.io Mar 7 01:58:27.022946 containerd[1587]: time="2026-03-07T01:58:27.019339551Z" level=warning msg="cleaning up after shim disconnected" id=12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0 namespace=k8s.io Mar 7 01:58:27.022946 containerd[1587]: time="2026-03-07T01:58:27.019351533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:58:27.068258 containerd[1587]: time="2026-03-07T01:58:27.066964227Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:27.079783 containerd[1587]: time="2026-03-07T01:58:27.079667563Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 7 01:58:27.082520 containerd[1587]: time="2026-03-07T01:58:27.082488072Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:27.087867 containerd[1587]: time="2026-03-07T01:58:27.085453752Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.596518831s" Mar 7 01:58:27.087867 containerd[1587]: time="2026-03-07T01:58:27.085552769Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 7 01:58:27.128410 containerd[1587]: time="2026-03-07T01:58:27.126906173Z" level=info msg="CreateContainer within sandbox \"e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 7 01:58:27.162459 containerd[1587]: time="2026-03-07T01:58:27.161348853Z" level=info msg="CreateContainer within sandbox \"e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\"" Mar 7 01:58:27.164093 containerd[1587]: time="2026-03-07T01:58:27.163357970Z" level=info msg="StartContainer for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\"" Mar 7 01:58:27.342839 kubelet[2825]: E0307 01:58:27.341059 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:27.406006 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0-rootfs.mount: Deactivated successfully. Mar 7 01:58:27.427994 containerd[1587]: time="2026-03-07T01:58:27.422591164Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:58:27.526833 containerd[1587]: time="2026-03-07T01:58:27.523458563Z" level=info msg="StartContainer for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" returns successfully" Mar 7 01:58:27.584659 containerd[1587]: time="2026-03-07T01:58:27.582002265Z" level=info msg="CreateContainer within sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\"" Mar 7 01:58:27.592200 containerd[1587]: time="2026-03-07T01:58:27.589405570Z" level=info msg="StartContainer for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\"" Mar 7 01:58:28.246180 containerd[1587]: time="2026-03-07T01:58:28.233855088Z" level=info msg="StartContainer for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" returns successfully" Mar 7 01:58:28.686993 kubelet[2825]: E0307 01:58:28.686200 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:28.862168 kubelet[2825]: I0307 01:58:28.861960 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fdmbb" podStartSLOduration=3.470427824 podStartE2EDuration="45.861934084s" podCreationTimestamp="2026-03-07 01:57:43 +0000 UTC" firstStartedPulling="2026-03-07 01:57:44.698707525 +0000 UTC m=+9.446399643" 
lastFinishedPulling="2026-03-07 01:58:27.090213785 +0000 UTC m=+51.837905903" observedRunningTime="2026-03-07 01:58:28.834293091 +0000 UTC m=+53.581985248" watchObservedRunningTime="2026-03-07 01:58:28.861934084 +0000 UTC m=+53.609626252" Mar 7 01:58:29.041306 systemd[1]: run-containerd-runc-k8s.io-ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b-runc.ZP6F3Q.mount: Deactivated successfully. Mar 7 01:58:29.709689 kubelet[2825]: E0307 01:58:29.706385 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:30.262540 kubelet[2825]: I0307 01:58:30.244701 2825 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:58:30.684220 kubelet[2825]: I0307 01:58:30.674324 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bc6ff14-5201-4fc1-afa0-c6f7ffd6ee99-config-volume\") pod \"coredns-674b8bbfcf-vrxk6\" (UID: \"0bc6ff14-5201-4fc1-afa0-c6f7ffd6ee99\") " pod="kube-system/coredns-674b8bbfcf-vrxk6" Mar 7 01:58:30.684220 kubelet[2825]: I0307 01:58:30.674427 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f8fa8df-debb-41e7-b0b9-500db6d1635c-config-volume\") pod \"coredns-674b8bbfcf-dxvnr\" (UID: \"5f8fa8df-debb-41e7-b0b9-500db6d1635c\") " pod="kube-system/coredns-674b8bbfcf-dxvnr" Mar 7 01:58:30.684220 kubelet[2825]: I0307 01:58:30.674470 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jw6w\" (UniqueName: \"kubernetes.io/projected/0bc6ff14-5201-4fc1-afa0-c6f7ffd6ee99-kube-api-access-9jw6w\") pod \"coredns-674b8bbfcf-vrxk6\" (UID: \"0bc6ff14-5201-4fc1-afa0-c6f7ffd6ee99\") " pod="kube-system/coredns-674b8bbfcf-vrxk6" Mar 7 
01:58:30.684220 kubelet[2825]: I0307 01:58:30.674534 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrlx\" (UniqueName: \"kubernetes.io/projected/5f8fa8df-debb-41e7-b0b9-500db6d1635c-kube-api-access-sdrlx\") pod \"coredns-674b8bbfcf-dxvnr\" (UID: \"5f8fa8df-debb-41e7-b0b9-500db6d1635c\") " pod="kube-system/coredns-674b8bbfcf-dxvnr" Mar 7 01:58:30.734955 kubelet[2825]: E0307 01:58:30.708756 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:31.147568 kubelet[2825]: I0307 01:58:31.146556 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kzkkn" podStartSLOduration=13.134559657 podStartE2EDuration="48.146536342s" podCreationTimestamp="2026-03-07 01:57:43 +0000 UTC" firstStartedPulling="2026-03-07 01:57:44.462039681 +0000 UTC m=+9.209731809" lastFinishedPulling="2026-03-07 01:58:19.474016376 +0000 UTC m=+44.221708494" observedRunningTime="2026-03-07 01:58:31.145994098 +0000 UTC m=+55.893686237" watchObservedRunningTime="2026-03-07 01:58:31.146536342 +0000 UTC m=+55.894228479" Mar 7 01:58:31.205517 kubelet[2825]: E0307 01:58:31.200656 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:31.241526 kubelet[2825]: E0307 01:58:31.241405 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:31.253477 containerd[1587]: time="2026-03-07T01:58:31.252405887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dxvnr,Uid:5f8fa8df-debb-41e7-b0b9-500db6d1635c,Namespace:kube-system,Attempt:0,}" Mar 7 01:58:31.259381 containerd[1587]: 
time="2026-03-07T01:58:31.256637166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vrxk6,Uid:0bc6ff14-5201-4fc1-afa0-c6f7ffd6ee99,Namespace:kube-system,Attempt:0,}" Mar 7 01:58:32.020183 kubelet[2825]: E0307 01:58:32.007438 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:33.170619 systemd[1]: run-containerd-runc-k8s.io-ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b-runc.VZehow.mount: Deactivated successfully. Mar 7 01:58:35.078369 systemd-networkd[1242]: cilium_host: Link UP Mar 7 01:58:35.083940 systemd-networkd[1242]: cilium_net: Link UP Mar 7 01:58:35.084379 systemd-networkd[1242]: cilium_net: Gained carrier Mar 7 01:58:35.084666 systemd-networkd[1242]: cilium_host: Gained carrier Mar 7 01:58:35.084935 systemd-networkd[1242]: cilium_net: Gained IPv6LL Mar 7 01:58:35.085276 systemd-networkd[1242]: cilium_host: Gained IPv6LL Mar 7 01:58:36.061427 systemd-networkd[1242]: cilium_vxlan: Link UP Mar 7 01:58:36.061719 systemd-networkd[1242]: cilium_vxlan: Gained carrier Mar 7 01:58:37.542541 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL Mar 7 01:58:37.881582 kernel: NET: Registered PF_ALG protocol family Mar 7 01:58:41.231284 kubelet[2825]: E0307 01:58:41.229432 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:42.252612 systemd-networkd[1242]: lxc_health: Link UP Mar 7 01:58:42.364250 systemd-networkd[1242]: lxc_health: Gained carrier Mar 7 01:58:43.435311 systemd-networkd[1242]: lxc_health: Gained IPv6LL Mar 7 01:58:43.610213 systemd-networkd[1242]: lxcd90e42a15dae: Link UP Mar 7 01:58:43.768447 kernel: eth0: renamed from tmp72a2f Mar 7 01:58:43.874020 systemd-networkd[1242]: lxc4c8a62391b10: Link UP Mar 7 01:58:44.021219 kubelet[2825]: 
E0307 01:58:44.019421 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:44.091181 kernel: eth0: renamed from tmp7cba0 Mar 7 01:58:44.158283 systemd-networkd[1242]: lxcd90e42a15dae: Gained carrier Mar 7 01:58:44.165413 systemd-networkd[1242]: lxc4c8a62391b10: Gained carrier Mar 7 01:58:44.399754 systemd[1]: run-containerd-runc-k8s.io-ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b-runc.HJvtbs.mount: Deactivated successfully. Mar 7 01:58:45.020186 kubelet[2825]: E0307 01:58:45.018810 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:45.744383 systemd-networkd[1242]: lxc4c8a62391b10: Gained IPv6LL Mar 7 01:58:46.013370 systemd-networkd[1242]: lxcd90e42a15dae: Gained IPv6LL Mar 7 01:58:46.052010 kubelet[2825]: E0307 01:58:46.043842 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:51.950327 sudo[1779]: pam_unix(sudo:session): session closed for user root Mar 7 01:58:51.966989 sshd[1775]: pam_unix(sshd:session): session closed for user core Mar 7 01:58:51.980857 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:56098.service: Deactivated successfully. Mar 7 01:58:52.014016 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:58:52.015180 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:58:52.034962 systemd-logind[1563]: Removed session 7. 
Mar 7 01:58:54.267758 kubelet[2825]: E0307 01:58:54.258752 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:55.232194 kubelet[2825]: E0307 01:58:55.230045 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:59.233466 kubelet[2825]: E0307 01:58:59.231233 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:00.164511 containerd[1587]: time="2026-03-07T01:59:00.162092560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:59:00.164511 containerd[1587]: time="2026-03-07T01:59:00.162214890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:59:00.164511 containerd[1587]: time="2026-03-07T01:59:00.162322662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:59:00.164511 containerd[1587]: time="2026-03-07T01:59:00.163068057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:59:00.264089 containerd[1587]: time="2026-03-07T01:59:00.260743749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:59:00.264089 containerd[1587]: time="2026-03-07T01:59:00.263022551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:59:00.264089 containerd[1587]: time="2026-03-07T01:59:00.263977439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:59:00.264854 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:59:00.265341 containerd[1587]: time="2026-03-07T01:59:00.264796723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:59:00.409239 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:59:00.421611 containerd[1587]: time="2026-03-07T01:59:00.418996234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vrxk6,Uid:0bc6ff14-5201-4fc1-afa0-c6f7ffd6ee99,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cba02585dc35db407faad516d39de52ad0734d2befad16f45a38d400c60a60d\"" Mar 7 01:59:00.421747 kubelet[2825]: E0307 01:59:00.421208 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:00.466750 containerd[1587]: time="2026-03-07T01:59:00.465289967Z" level=info msg="CreateContainer within sandbox \"7cba02585dc35db407faad516d39de52ad0734d2befad16f45a38d400c60a60d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:59:00.560513 containerd[1587]: time="2026-03-07T01:59:00.560048674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dxvnr,Uid:5f8fa8df-debb-41e7-b0b9-500db6d1635c,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a2f110fb9b39cbf4483f1ba80cfc5b8a424bdd47bc32f25632f988bc2e5a02\"" Mar 7 01:59:00.568334 kubelet[2825]: E0307 01:59:00.561440 2825 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:00.649018 containerd[1587]: time="2026-03-07T01:59:00.648580863Z" level=info msg="CreateContainer within sandbox \"72a2f110fb9b39cbf4483f1ba80cfc5b8a424bdd47bc32f25632f988bc2e5a02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:59:00.656186 containerd[1587]: time="2026-03-07T01:59:00.653716876Z" level=info msg="CreateContainer within sandbox \"7cba02585dc35db407faad516d39de52ad0734d2befad16f45a38d400c60a60d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d77c41d75a6ff71ca09b5c5485a8fd9bf965932a3a72093d2f8d4ee57c414a2a\"" Mar 7 01:59:00.716731 containerd[1587]: time="2026-03-07T01:59:00.702823476Z" level=info msg="StartContainer for \"d77c41d75a6ff71ca09b5c5485a8fd9bf965932a3a72093d2f8d4ee57c414a2a\"" Mar 7 01:59:00.785701 containerd[1587]: time="2026-03-07T01:59:00.783967034Z" level=info msg="CreateContainer within sandbox \"72a2f110fb9b39cbf4483f1ba80cfc5b8a424bdd47bc32f25632f988bc2e5a02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88188c5cad28806223be5b9bee47af810049d801f04895331901091ac175542f\"" Mar 7 01:59:00.799934 containerd[1587]: time="2026-03-07T01:59:00.789113315Z" level=info msg="StartContainer for \"88188c5cad28806223be5b9bee47af810049d801f04895331901091ac175542f\"" Mar 7 01:59:01.129919 containerd[1587]: time="2026-03-07T01:59:01.123329779Z" level=info msg="StartContainer for \"d77c41d75a6ff71ca09b5c5485a8fd9bf965932a3a72093d2f8d4ee57c414a2a\" returns successfully" Mar 7 01:59:01.276208 kubelet[2825]: E0307 01:59:01.275812 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:01.301734 containerd[1587]: time="2026-03-07T01:59:01.293217489Z" level=info msg="StartContainer for 
\"88188c5cad28806223be5b9bee47af810049d801f04895331901091ac175542f\" returns successfully" Mar 7 01:59:02.311723 kubelet[2825]: E0307 01:59:02.310332 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:02.311723 kubelet[2825]: E0307 01:59:02.310508 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:02.407945 kubelet[2825]: I0307 01:59:02.407461 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vrxk6" podStartSLOduration=86.407441008 podStartE2EDuration="1m26.407441008s" podCreationTimestamp="2026-03-07 01:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:59:01.424713659 +0000 UTC m=+86.172405787" watchObservedRunningTime="2026-03-07 01:59:02.407441008 +0000 UTC m=+87.155133156" Mar 7 01:59:02.407945 kubelet[2825]: I0307 01:59:02.407567 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dxvnr" podStartSLOduration=86.407560563 podStartE2EDuration="1m26.407560563s" podCreationTimestamp="2026-03-07 01:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:59:02.406718006 +0000 UTC m=+87.154410144" watchObservedRunningTime="2026-03-07 01:59:02.407560563 +0000 UTC m=+87.155252681" Mar 7 01:59:03.313188 kubelet[2825]: E0307 01:59:03.311905 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:03.313188 kubelet[2825]: E0307 01:59:03.312790 2825 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:04.323395 kubelet[2825]: E0307 01:59:04.322241 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:46.243032 kubelet[2825]: E0307 01:59:46.242097 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:47.234435 kubelet[2825]: E0307 01:59:47.232703 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:58.261946 kubelet[2825]: E0307 01:59:58.261351 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:11.251336 kubelet[2825]: E0307 02:00:11.233863 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:20.240497 kubelet[2825]: E0307 02:00:20.232029 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:21.233091 kubelet[2825]: E0307 02:00:21.231727 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:21.233091 kubelet[2825]: E0307 02:00:21.232708 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:24.246806 kubelet[2825]: E0307 02:00:24.246751 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:45.354072 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066). Mar 7 02:00:45.640612 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:00:45.681433 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:45.734641 systemd-logind[1563]: New session 8 of user core. Mar 7 02:00:45.754793 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 02:00:46.428636 sshd[4410]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:46.452933 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:36066.service: Deactivated successfully. Mar 7 02:00:46.477022 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 02:00:46.487592 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Mar 7 02:00:46.501425 systemd-logind[1563]: Removed session 8. Mar 7 02:00:51.490891 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:52778.service - OpenSSH per-connection server daemon (10.0.0.1:52778). Mar 7 02:00:51.715565 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 52778 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:00:51.723079 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:51.759389 systemd-logind[1563]: New session 9 of user core. Mar 7 02:00:51.775841 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 02:00:52.372858 sshd[4427]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:52.420575 systemd-logind[1563]: Session 9 logged out. 
Waiting for processes to exit. Mar 7 02:00:52.437761 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:52778.service: Deactivated successfully. Mar 7 02:00:52.469720 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 02:00:52.482963 systemd-logind[1563]: Removed session 9. Mar 7 02:00:57.414859 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:52790.service - OpenSSH per-connection server daemon (10.0.0.1:52790). Mar 7 02:00:57.513053 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 52790 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:00:57.516590 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:57.562520 systemd-logind[1563]: New session 10 of user core. Mar 7 02:00:57.582018 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 02:00:58.355331 sshd[4443]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:58.382408 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:52790.service: Deactivated successfully. Mar 7 02:00:58.421898 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 02:00:58.431873 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Mar 7 02:00:58.439950 systemd-logind[1563]: Removed session 10. Mar 7 02:01:03.451717 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:34706.service - OpenSSH per-connection server daemon (10.0.0.1:34706). Mar 7 02:01:03.648487 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 34706 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:03.647833 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:03.679583 systemd-logind[1563]: New session 11 of user core. Mar 7 02:01:03.703030 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 7 02:01:04.239869 sshd[4460]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:04.252017 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:34706.service: Deactivated successfully. Mar 7 02:01:04.270097 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 02:01:04.270675 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Mar 7 02:01:04.274590 systemd-logind[1563]: Removed session 11. Mar 7 02:01:09.277684 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:34718.service - OpenSSH per-connection server daemon (10.0.0.1:34718). Mar 7 02:01:09.462785 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 34718 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:09.469743 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:09.525551 systemd-logind[1563]: New session 12 of user core. Mar 7 02:01:09.545239 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 02:01:09.974512 sshd[4476]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:09.998086 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:34718.service: Deactivated successfully. Mar 7 02:01:10.011216 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Mar 7 02:01:10.030221 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 02:01:10.033532 systemd-logind[1563]: Removed session 12. 
Mar 7 02:01:12.239637 kubelet[2825]: E0307 02:01:12.238446 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:13.231570 kubelet[2825]: E0307 02:01:13.229569 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:15.003789 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:52854.service - OpenSSH per-connection server daemon (10.0.0.1:52854). Mar 7 02:01:15.132915 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 52854 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:15.129734 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:15.157039 systemd-logind[1563]: New session 13 of user core. Mar 7 02:01:15.175209 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 02:01:15.557932 sshd[4494]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:15.567578 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:52854.service: Deactivated successfully. Mar 7 02:01:15.581704 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 02:01:15.585993 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Mar 7 02:01:15.590548 systemd-logind[1563]: Removed session 13. Mar 7 02:01:20.602963 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:42844.service - OpenSSH per-connection server daemon (10.0.0.1:42844). Mar 7 02:01:20.883395 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 42844 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:20.889596 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:20.964204 systemd-logind[1563]: New session 14 of user core. 
Mar 7 02:01:20.985788 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 02:01:21.678831 sshd[4510]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:21.697340 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:42844.service: Deactivated successfully.
Mar 7 02:01:21.727197 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 02:01:21.733664 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Mar 7 02:01:21.749753 systemd-logind[1563]: Removed session 14.
Mar 7 02:01:22.252720 kubelet[2825]: E0307 02:01:22.250649 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:24.242625 kubelet[2825]: E0307 02:01:24.240335 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:26.741191 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:42850.service - OpenSSH per-connection server daemon (10.0.0.1:42850).
Mar 7 02:01:27.067597 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 42850 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:27.063989 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:27.109718 systemd-logind[1563]: New session 15 of user core.
Mar 7 02:01:27.146848 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 02:01:27.822165 sshd[4526]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:27.835282 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Mar 7 02:01:27.839344 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:42850.service: Deactivated successfully.
Mar 7 02:01:27.854100 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 02:01:27.877018 systemd-logind[1563]: Removed session 15.
Mar 7 02:01:28.244656 kubelet[2825]: E0307 02:01:28.240275 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:32.867863 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:43950.service - OpenSSH per-connection server daemon (10.0.0.1:43950).
Mar 7 02:01:32.969269 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 43950 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:32.978955 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:33.015300 systemd-logind[1563]: New session 16 of user core.
Mar 7 02:01:33.038975 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 02:01:33.504024 sshd[4543]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:33.526956 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:43950.service: Deactivated successfully.
Mar 7 02:01:33.546990 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 02:01:33.557106 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
Mar 7 02:01:33.567569 systemd-logind[1563]: Removed session 16.
Mar 7 02:01:34.238046 kubelet[2825]: E0307 02:01:34.237995 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:35.234240 kubelet[2825]: E0307 02:01:35.231825 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:38.554822 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:43960.service - OpenSSH per-connection server daemon (10.0.0.1:43960).
Mar 7 02:01:38.696426 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 43960 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:38.711984 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:38.736356 systemd-logind[1563]: New session 17 of user core.
Mar 7 02:01:38.743786 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 02:01:39.536175 sshd[4565]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:39.571751 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:43960.service: Deactivated successfully.
Mar 7 02:01:39.592801 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 02:01:39.599008 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
Mar 7 02:01:39.634221 systemd-logind[1563]: Removed session 17.
Mar 7 02:01:40.239797 kubelet[2825]: E0307 02:01:40.230709 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:44.600975 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:59084.service - OpenSSH per-connection server daemon (10.0.0.1:59084).
Mar 7 02:01:44.828072 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 59084 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:44.853029 sshd[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:44.879905 systemd-logind[1563]: New session 18 of user core.
Mar 7 02:01:44.921322 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 02:01:45.773969 sshd[4583]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:45.828848 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:59084.service: Deactivated successfully.
Mar 7 02:01:45.874017 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 02:01:45.892582 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
Mar 7 02:01:45.894527 systemd-logind[1563]: Removed session 18.
Mar 7 02:01:50.800034 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:55764.service - OpenSSH per-connection server daemon (10.0.0.1:55764).
Mar 7 02:01:51.210583 sshd[4599]: Accepted publickey for core from 10.0.0.1 port 55764 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:51.225531 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:51.266263 systemd-logind[1563]: New session 19 of user core.
Mar 7 02:01:51.292829 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 02:01:52.129838 sshd[4599]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:52.161086 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:55764.service: Deactivated successfully.
Mar 7 02:01:52.193058 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 02:01:52.219493 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
Mar 7 02:01:52.240014 systemd-logind[1563]: Removed session 19.
Mar 7 02:01:57.184007 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780).
Mar 7 02:01:57.376860 sshd[4616]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:57.422616 sshd[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:57.488091 systemd-logind[1563]: New session 20 of user core.
Mar 7 02:01:57.523677 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 02:01:58.431432 sshd[4616]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:58.453456 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:55780.service: Deactivated successfully.
Mar 7 02:01:58.473058 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
Mar 7 02:01:58.484596 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 02:01:58.495031 systemd-logind[1563]: Removed session 20.
Mar 7 02:02:03.471487 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:60858.service - OpenSSH per-connection server daemon (10.0.0.1:60858).
Mar 7 02:02:03.818504 sshd[4633]: Accepted publickey for core from 10.0.0.1 port 60858 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:03.829799 sshd[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:03.892581 systemd-logind[1563]: New session 21 of user core.
Mar 7 02:02:03.920389 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 02:02:05.153754 sshd[4633]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:05.179554 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:60858.service: Deactivated successfully.
Mar 7 02:02:05.207754 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit.
Mar 7 02:02:05.208945 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 02:02:05.247572 systemd-logind[1563]: Removed session 21.
Mar 7 02:02:10.286814 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:49230.service - OpenSSH per-connection server daemon (10.0.0.1:49230).
Mar 7 02:02:10.632016 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 49230 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:10.641174 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:10.679926 systemd-logind[1563]: New session 22 of user core.
Mar 7 02:02:10.693953 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 02:02:11.268955 sshd[4650]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:11.308838 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:49246.service - OpenSSH per-connection server daemon (10.0.0.1:49246).
Mar 7 02:02:11.313788 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:49230.service: Deactivated successfully.
Mar 7 02:02:11.336391 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 02:02:11.365631 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit.
Mar 7 02:02:11.384345 systemd-logind[1563]: Removed session 22.
Mar 7 02:02:11.457933 sshd[4665]: Accepted publickey for core from 10.0.0.1 port 49246 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:11.460384 sshd[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:11.494082 systemd-logind[1563]: New session 23 of user core.
Mar 7 02:02:11.519463 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 02:02:12.485758 sshd[4665]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:12.504915 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:49246.service: Deactivated successfully.
Mar 7 02:02:12.537354 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 02:02:12.538520 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
Mar 7 02:02:12.597485 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:49262.service - OpenSSH per-connection server daemon (10.0.0.1:49262).
Mar 7 02:02:12.610715 systemd-logind[1563]: Removed session 23.
Mar 7 02:02:12.778554 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 49262 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:12.786818 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:12.815265 systemd-logind[1563]: New session 24 of user core.
Mar 7 02:02:12.821795 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 02:02:13.281773 sshd[4682]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:13.292022 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:49262.service: Deactivated successfully.
Mar 7 02:02:13.317734 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 02:02:13.319828 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
Mar 7 02:02:13.326966 systemd-logind[1563]: Removed session 24.
Mar 7 02:02:18.338765 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:49278.service - OpenSSH per-connection server daemon (10.0.0.1:49278).
Mar 7 02:02:18.463292 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 49278 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:18.470711 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:18.517529 systemd-logind[1563]: New session 25 of user core.
Mar 7 02:02:18.543826 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 02:02:18.944743 sshd[4699]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:18.950601 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit.
Mar 7 02:02:18.953310 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:49278.service: Deactivated successfully.
Mar 7 02:02:18.964501 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 02:02:18.978020 systemd-logind[1563]: Removed session 25.
Mar 7 02:02:23.978540 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:39074.service - OpenSSH per-connection server daemon (10.0.0.1:39074).
Mar 7 02:02:24.157528 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 39074 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:24.175661 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:24.214198 systemd-logind[1563]: New session 26 of user core.
Mar 7 02:02:24.241811 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 02:02:24.685029 sshd[4715]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:24.732039 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:39074.service: Deactivated successfully.
Mar 7 02:02:24.750787 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 02:02:24.756478 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit.
Mar 7 02:02:24.759178 systemd-logind[1563]: Removed session 26.
Mar 7 02:02:27.234516 kubelet[2825]: E0307 02:02:27.232659 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:02:29.760204 systemd[1]: Started sshd@26-10.0.0.132:22-10.0.0.1:39084.service - OpenSSH per-connection server daemon (10.0.0.1:39084).
Mar 7 02:02:29.954904 sshd[4730]: Accepted publickey for core from 10.0.0.1 port 39084 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:29.962675 sshd[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:30.016446 systemd-logind[1563]: New session 27 of user core.
Mar 7 02:02:30.021008 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 02:02:30.625497 sshd[4730]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:30.665947 systemd[1]: sshd@26-10.0.0.132:22-10.0.0.1:39084.service: Deactivated successfully.
Mar 7 02:02:30.676451 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 02:02:30.679706 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit.
Mar 7 02:02:30.688020 systemd-logind[1563]: Removed session 27.
Mar 7 02:02:32.237427 kubelet[2825]: E0307 02:02:32.236935 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:02:33.234252 kubelet[2825]: E0307 02:02:33.233492 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:02:35.476046 update_engine[1566]: I20260307 02:02:35.475929 1566 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 7 02:02:35.476046 update_engine[1566]: I20260307 02:02:35.476029 1566 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 7 02:02:35.482512 update_engine[1566]: I20260307 02:02:35.479586 1566 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 7 02:02:35.482512 update_engine[1566]: I20260307 02:02:35.480414 1566 omaha_request_params.cc:62] Current group set to lts
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492084 1566 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492194 1566 update_attempter.cc:643] Scheduling an action processor start.
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492231 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492311 1566 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492637 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492658 1566 omaha_request_action.cc:272] Request:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]:
Mar 7 02:02:35.493177 update_engine[1566]: I20260307 02:02:35.492671 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:02:35.495308 locksmithd[1623]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 7 02:02:35.511349 update_engine[1566]: I20260307 02:02:35.511185 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:02:35.513085 update_engine[1566]: I20260307 02:02:35.512990 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:02:35.537444 update_engine[1566]: E20260307 02:02:35.535997 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:02:35.537444 update_engine[1566]: I20260307 02:02:35.536437 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 7 02:02:35.675631 systemd[1]: Started sshd@27-10.0.0.132:22-10.0.0.1:52956.service - OpenSSH per-connection server daemon (10.0.0.1:52956).
Mar 7 02:02:35.840905 sshd[4745]: Accepted publickey for core from 10.0.0.1 port 52956 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:35.856335 sshd[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:35.875589 systemd-logind[1563]: New session 28 of user core.
Mar 7 02:02:35.882831 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 02:02:36.446353 sshd[4745]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:36.469633 systemd[1]: sshd@27-10.0.0.132:22-10.0.0.1:52956.service: Deactivated successfully.
Mar 7 02:02:36.489867 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 02:02:36.493654 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit.
Mar 7 02:02:36.520305 systemd-logind[1563]: Removed session 28.
Mar 7 02:02:37.234967 kubelet[2825]: E0307 02:02:37.233301 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:02:41.482174 systemd[1]: Started sshd@28-10.0.0.132:22-10.0.0.1:44140.service - OpenSSH per-connection server daemon (10.0.0.1:44140).
Mar 7 02:02:41.601666 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 44140 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:41.609715 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:41.650383 systemd-logind[1563]: New session 29 of user core.
Mar 7 02:02:41.667737 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 02:02:42.175246 sshd[4764]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:42.194862 systemd[1]: sshd@28-10.0.0.132:22-10.0.0.1:44140.service: Deactivated successfully.
Mar 7 02:02:42.213795 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 02:02:42.219995 systemd-logind[1563]: Session 29 logged out. Waiting for processes to exit.
Mar 7 02:02:42.244316 kubelet[2825]: E0307 02:02:42.233200 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:02:42.239633 systemd-logind[1563]: Removed session 29.
Mar 7 02:02:45.478421 update_engine[1566]: I20260307 02:02:45.477487 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:02:45.478421 update_engine[1566]: I20260307 02:02:45.477936 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:02:45.478421 update_engine[1566]: I20260307 02:02:45.478307 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:02:45.509197 update_engine[1566]: E20260307 02:02:45.508898 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:02:45.509197 update_engine[1566]: I20260307 02:02:45.509035 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 7 02:02:47.238335 systemd[1]: Started sshd@29-10.0.0.132:22-10.0.0.1:44144.service - OpenSSH per-connection server daemon (10.0.0.1:44144).
Mar 7 02:02:47.393616 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 44144 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:47.418862 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:47.432753 systemd-logind[1563]: New session 30 of user core.
Mar 7 02:02:47.453031 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 02:02:47.985013 sshd[4780]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:48.004020 systemd[1]: sshd@29-10.0.0.132:22-10.0.0.1:44144.service: Deactivated successfully.
Mar 7 02:02:48.026433 systemd-logind[1563]: Session 30 logged out. Waiting for processes to exit.
Mar 7 02:02:48.026960 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 02:02:48.038853 systemd-logind[1563]: Removed session 30.
Mar 7 02:02:53.075835 systemd[1]: Started sshd@30-10.0.0.132:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678).
Mar 7 02:02:53.264538 sshd[4795]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:53.268449 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:53.292662 systemd-logind[1563]: New session 31 of user core.
Mar 7 02:02:53.319422 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 02:02:53.816466 sshd[4795]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:53.840046 systemd[1]: sshd@30-10.0.0.132:22-10.0.0.1:46678.service: Deactivated successfully.
Mar 7 02:02:53.867061 systemd-logind[1563]: Session 31 logged out. Waiting for processes to exit.
Mar 7 02:02:53.881279 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 02:02:53.897597 systemd-logind[1563]: Removed session 31.
Mar 7 02:02:55.237088 kubelet[2825]: E0307 02:02:55.236497 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:02:55.475780 update_engine[1566]: I20260307 02:02:55.472361 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:02:55.475780 update_engine[1566]: I20260307 02:02:55.472884 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:02:55.475780 update_engine[1566]: I20260307 02:02:55.473440 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:02:55.503746 update_engine[1566]: E20260307 02:02:55.503110 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:02:55.503746 update_engine[1566]: I20260307 02:02:55.503288 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 7 02:02:58.853285 systemd[1]: Started sshd@31-10.0.0.132:22-10.0.0.1:46694.service - OpenSSH per-connection server daemon (10.0.0.1:46694).
Mar 7 02:02:59.042372 sshd[4810]: Accepted publickey for core from 10.0.0.1 port 46694 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:02:59.054987 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:02:59.147520 systemd-logind[1563]: New session 32 of user core.
Mar 7 02:02:59.167781 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 02:02:59.623347 sshd[4810]: pam_unix(sshd:session): session closed for user core
Mar 7 02:02:59.641031 systemd[1]: sshd@31-10.0.0.132:22-10.0.0.1:46694.service: Deactivated successfully.
Mar 7 02:02:59.662113 systemd-logind[1563]: Session 32 logged out. Waiting for processes to exit.
Mar 7 02:02:59.679009 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 02:02:59.710971 systemd-logind[1563]: Removed session 32.
Mar 7 02:03:04.667499 systemd[1]: Started sshd@32-10.0.0.132:22-10.0.0.1:46422.service - OpenSSH per-connection server daemon (10.0.0.1:46422).
Mar 7 02:03:04.796476 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 46422 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:04.812986 sshd[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:04.851572 systemd-logind[1563]: New session 33 of user core.
Mar 7 02:03:04.865972 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 02:03:05.232788 kubelet[2825]: E0307 02:03:05.230282 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:03:05.232788 kubelet[2825]: E0307 02:03:05.231248 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:03:05.351209 sshd[4825]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:05.366077 systemd[1]: sshd@32-10.0.0.132:22-10.0.0.1:46422.service: Deactivated successfully.
Mar 7 02:03:05.377194 systemd-logind[1563]: Session 33 logged out. Waiting for processes to exit.
Mar 7 02:03:05.378626 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 02:03:05.391363 systemd-logind[1563]: Removed session 33.
Mar 7 02:03:05.472405 update_engine[1566]: I20260307 02:03:05.470592 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:03:05.472405 update_engine[1566]: I20260307 02:03:05.471022 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:03:05.472405 update_engine[1566]: I20260307 02:03:05.471376 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:03:05.492112 update_engine[1566]: E20260307 02:03:05.491804 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:03:05.492112 update_engine[1566]: I20260307 02:03:05.491959 1566 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 02:03:05.492112 update_engine[1566]: I20260307 02:03:05.491982 1566 omaha_request_action.cc:617] Omaha request response:
Mar 7 02:03:05.492112 update_engine[1566]: E20260307 02:03:05.492094 1566 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492175 1566 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492188 1566 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492198 1566 update_attempter.cc:306] Processing Done.
Mar 7 02:03:05.492429 update_engine[1566]: E20260307 02:03:05.492221 1566 update_attempter.cc:619] Update failed.
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492231 1566 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492242 1566 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492254 1566 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492340 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492376 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492389 1566 omaha_request_action.cc:272] Request:
Mar 7 02:03:05.492429 update_engine[1566]:
Mar 7 02:03:05.492429 update_engine[1566]:
Mar 7 02:03:05.492429 update_engine[1566]:
Mar 7 02:03:05.492429 update_engine[1566]:
Mar 7 02:03:05.492429 update_engine[1566]:
Mar 7 02:03:05.492429 update_engine[1566]:
Mar 7 02:03:05.492429 update_engine[1566]: I20260307 02:03:05.492404 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:03:05.493023 update_engine[1566]: I20260307 02:03:05.492707 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:03:05.493062 update_engine[1566]: I20260307 02:03:05.493026 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:03:05.493464 locksmithd[1623]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 02:03:05.511703 update_engine[1566]: E20260307 02:03:05.511600 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511737 1566 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511792 1566 omaha_request_action.cc:617] Omaha request response:
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511811 1566 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511823 1566 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511833 1566 update_attempter.cc:306] Processing Done.
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511847 1566 update_attempter.cc:310] Error event sent.
Mar 7 02:03:05.511883 update_engine[1566]: I20260307 02:03:05.511864 1566 update_check_scheduler.cc:74] Next update check in 49m45s
Mar 7 02:03:05.514875 locksmithd[1623]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 02:03:10.389711 systemd[1]: Started sshd@33-10.0.0.132:22-10.0.0.1:57554.service - OpenSSH per-connection server daemon (10.0.0.1:57554).
Mar 7 02:03:10.584893 sshd[4842]: Accepted publickey for core from 10.0.0.1 port 57554 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:10.589024 sshd[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:10.621961 systemd-logind[1563]: New session 34 of user core.
Mar 7 02:03:10.629700 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 02:03:11.178454 sshd[4842]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:11.199078 systemd[1]: sshd@33-10.0.0.132:22-10.0.0.1:57554.service: Deactivated successfully.
Mar 7 02:03:11.215612 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 02:03:11.218054 systemd-logind[1563]: Session 34 logged out. Waiting for processes to exit.
Mar 7 02:03:11.223563 systemd-logind[1563]: Removed session 34.
Mar 7 02:03:16.226431 systemd[1]: Started sshd@34-10.0.0.132:22-10.0.0.1:57564.service - OpenSSH per-connection server daemon (10.0.0.1:57564).
Mar 7 02:03:16.393786 sshd[4858]: Accepted publickey for core from 10.0.0.1 port 57564 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:16.421769 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:16.454570 systemd-logind[1563]: New session 35 of user core.
Mar 7 02:03:16.480634 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 02:03:17.063390 sshd[4858]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:17.080531 systemd[1]: sshd@34-10.0.0.132:22-10.0.0.1:57564.service: Deactivated successfully.
Mar 7 02:03:17.102603 systemd-logind[1563]: Session 35 logged out. Waiting for processes to exit.
Mar 7 02:03:17.119574 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 02:03:17.129796 systemd-logind[1563]: Removed session 35.
Mar 7 02:03:22.091877 systemd[1]: Started sshd@35-10.0.0.132:22-10.0.0.1:38324.service - OpenSSH per-connection server daemon (10.0.0.1:38324).
Mar 7 02:03:22.247729 sshd[4874]: Accepted publickey for core from 10.0.0.1 port 38324 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:22.255673 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:22.298183 systemd-logind[1563]: New session 36 of user core.
Mar 7 02:03:22.338910 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 02:03:23.188333 sshd[4874]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:23.223945 systemd[1]: sshd@35-10.0.0.132:22-10.0.0.1:38324.service: Deactivated successfully.
Mar 7 02:03:23.236581 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 02:03:23.241584 systemd-logind[1563]: Session 36 logged out. Waiting for processes to exit.
Mar 7 02:03:23.245697 systemd-logind[1563]: Removed session 36.
Mar 7 02:03:28.462553 systemd[1]: Started sshd@36-10.0.0.132:22-10.0.0.1:38328.service - OpenSSH per-connection server daemon (10.0.0.1:38328).
Mar 7 02:03:28.673916 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 38328 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:28.672810 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:28.691008 systemd-logind[1563]: New session 37 of user core.
Mar 7 02:03:28.733543 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 7 02:03:29.531075 sshd[4890]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:29.542777 systemd[1]: sshd@36-10.0.0.132:22-10.0.0.1:38328.service: Deactivated successfully.
Mar 7 02:03:29.555565 systemd[1]: session-37.scope: Deactivated successfully.
Mar 7 02:03:29.558545 systemd-logind[1563]: Session 37 logged out. Waiting for processes to exit.
Mar 7 02:03:29.573261 systemd-logind[1563]: Removed session 37.
Mar 7 02:03:34.895649 systemd[1]: Started sshd@37-10.0.0.132:22-10.0.0.1:54148.service - OpenSSH per-connection server daemon (10.0.0.1:54148).
Mar 7 02:03:35.444389 sshd[4906]: Accepted publickey for core from 10.0.0.1 port 54148 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:35.453963 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:35.495934 systemd-logind[1563]: New session 38 of user core.
Mar 7 02:03:35.545748 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 7 02:03:36.853860 sshd[4906]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:36.861506 systemd[1]: sshd@37-10.0.0.132:22-10.0.0.1:54148.service: Deactivated successfully.
Mar 7 02:03:36.870665 systemd[1]: session-38.scope: Deactivated successfully.
Mar 7 02:03:36.872938 systemd-logind[1563]: Session 38 logged out. Waiting for processes to exit.
Mar 7 02:03:36.876582 systemd-logind[1563]: Removed session 38.
Mar 7 02:03:40.234527 kubelet[2825]: E0307 02:03:40.234384 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:03:41.927037 systemd[1]: Started sshd@38-10.0.0.132:22-10.0.0.1:36514.service - OpenSSH per-connection server daemon (10.0.0.1:36514).
Mar 7 02:03:42.193423 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 36514 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:42.199371 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:42.234585 kubelet[2825]: E0307 02:03:42.234548 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:03:42.253576 systemd-logind[1563]: New session 39 of user core.
Mar 7 02:03:42.266685 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 7 02:03:42.973420 sshd[4928]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:42.995864 systemd[1]: sshd@38-10.0.0.132:22-10.0.0.1:36514.service: Deactivated successfully.
Mar 7 02:03:43.003403 systemd-logind[1563]: Session 39 logged out. Waiting for processes to exit.
Mar 7 02:03:43.004702 systemd[1]: session-39.scope: Deactivated successfully.
Mar 7 02:03:43.018390 systemd-logind[1563]: Removed session 39.
Mar 7 02:03:48.047026 systemd[1]: Started sshd@39-10.0.0.132:22-10.0.0.1:36524.service - OpenSSH per-connection server daemon (10.0.0.1:36524).
Mar 7 02:03:48.172388 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 36524 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:48.178387 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:48.222232 systemd-logind[1563]: New session 40 of user core.
Mar 7 02:03:48.241177 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 7 02:03:48.823060 sshd[4944]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:48.835555 systemd[1]: sshd@39-10.0.0.132:22-10.0.0.1:36524.service: Deactivated successfully.
Mar 7 02:03:48.869636 systemd[1]: session-40.scope: Deactivated successfully.
Mar 7 02:03:48.871875 systemd-logind[1563]: Session 40 logged out. Waiting for processes to exit.
Mar 7 02:03:48.877559 systemd-logind[1563]: Removed session 40.
Mar 7 02:03:53.885831 systemd[1]: Started sshd@40-10.0.0.132:22-10.0.0.1:49034.service - OpenSSH per-connection server daemon (10.0.0.1:49034).
Mar 7 02:03:54.065380 sshd[4961]: Accepted publickey for core from 10.0.0.1 port 49034 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:03:54.069863 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:03:54.287569 systemd-logind[1563]: New session 41 of user core.
Mar 7 02:03:54.418589 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 7 02:03:55.186647 sshd[4961]: pam_unix(sshd:session): session closed for user core
Mar 7 02:03:55.198377 systemd[1]: sshd@40-10.0.0.132:22-10.0.0.1:49034.service: Deactivated successfully.
Mar 7 02:03:55.225312 systemd[1]: session-41.scope: Deactivated successfully.
Mar 7 02:03:55.226635 systemd-logind[1563]: Session 41 logged out. Waiting for processes to exit.
Mar 7 02:03:55.228417 systemd-logind[1563]: Removed session 41.
Mar 7 02:03:56.231233 kubelet[2825]: E0307 02:03:56.230782 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:00.217111 systemd[1]: Started sshd@41-10.0.0.132:22-10.0.0.1:35802.service - OpenSSH per-connection server daemon (10.0.0.1:35802).
Mar 7 02:04:00.409275 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 35802 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:00.418270 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:00.461034 systemd-logind[1563]: New session 42 of user core.
Mar 7 02:04:00.476035 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 7 02:04:01.093317 sshd[4981]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:01.154264 systemd[1]: Started sshd@42-10.0.0.132:22-10.0.0.1:35814.service - OpenSSH per-connection server daemon (10.0.0.1:35814).
Mar 7 02:04:01.155094 systemd[1]: sshd@41-10.0.0.132:22-10.0.0.1:35802.service: Deactivated successfully.
Mar 7 02:04:01.187811 systemd[1]: session-42.scope: Deactivated successfully.
Mar 7 02:04:01.194220 systemd-logind[1563]: Session 42 logged out. Waiting for processes to exit.
Mar 7 02:04:01.210396 systemd-logind[1563]: Removed session 42.
Mar 7 02:04:01.280472 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 35814 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:01.288532 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:01.327103 systemd-logind[1563]: New session 43 of user core.
Mar 7 02:04:01.338745 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 7 02:04:02.237060 kubelet[2825]: E0307 02:04:02.233825 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:02.969487 sshd[4993]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:03.013592 systemd[1]: Started sshd@43-10.0.0.132:22-10.0.0.1:35818.service - OpenSSH per-connection server daemon (10.0.0.1:35818).
Mar 7 02:04:03.025046 systemd[1]: sshd@42-10.0.0.132:22-10.0.0.1:35814.service: Deactivated successfully.
Mar 7 02:04:03.036431 systemd[1]: session-43.scope: Deactivated successfully.
Mar 7 02:04:03.048491 systemd-logind[1563]: Session 43 logged out. Waiting for processes to exit.
Mar 7 02:04:03.067546 systemd-logind[1563]: Removed session 43.
Mar 7 02:04:03.136579 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 35818 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:03.140115 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:03.179987 systemd-logind[1563]: New session 44 of user core.
Mar 7 02:04:03.197743 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 7 02:04:04.267640 kubelet[2825]: E0307 02:04:04.248868 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:06.181762 sshd[5009]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:06.237217 systemd[1]: Started sshd@44-10.0.0.132:22-10.0.0.1:35836.service - OpenSSH per-connection server daemon (10.0.0.1:35836).
Mar 7 02:04:06.237989 systemd[1]: sshd@43-10.0.0.132:22-10.0.0.1:35818.service: Deactivated successfully.
Mar 7 02:04:06.262409 systemd[1]: session-44.scope: Deactivated successfully.
Mar 7 02:04:06.268792 systemd-logind[1563]: Session 44 logged out. Waiting for processes to exit.
Mar 7 02:04:06.276272 systemd-logind[1563]: Removed session 44.
Mar 7 02:04:06.469210 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 35836 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:06.473938 sshd[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:06.519371 systemd-logind[1563]: New session 45 of user core.
Mar 7 02:04:06.521939 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 7 02:04:07.820788 sshd[5032]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:07.851647 systemd[1]: Started sshd@45-10.0.0.132:22-10.0.0.1:35862.service - OpenSSH per-connection server daemon (10.0.0.1:35862).
Mar 7 02:04:07.860969 systemd[1]: sshd@44-10.0.0.132:22-10.0.0.1:35836.service: Deactivated successfully.
Mar 7 02:04:07.878539 systemd-logind[1563]: Session 45 logged out. Waiting for processes to exit.
Mar 7 02:04:07.884485 systemd[1]: session-45.scope: Deactivated successfully.
Mar 7 02:04:07.889622 systemd-logind[1563]: Removed session 45.
Mar 7 02:04:08.020042 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 35862 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:08.029080 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:08.059462 systemd-logind[1563]: New session 46 of user core.
Mar 7 02:04:08.085773 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 7 02:04:08.569473 sshd[5051]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:08.595919 systemd[1]: sshd@45-10.0.0.132:22-10.0.0.1:35862.service: Deactivated successfully.
Mar 7 02:04:08.612667 systemd-logind[1563]: Session 46 logged out. Waiting for processes to exit.
Mar 7 02:04:08.615975 systemd[1]: session-46.scope: Deactivated successfully.
Mar 7 02:04:08.621234 systemd-logind[1563]: Removed session 46.
Mar 7 02:04:10.236716 kubelet[2825]: E0307 02:04:10.236682 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:12.242802 kubelet[2825]: E0307 02:04:12.239540 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:13.234728 kubelet[2825]: E0307 02:04:13.230575 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:13.635671 systemd[1]: Started sshd@46-10.0.0.132:22-10.0.0.1:41786.service - OpenSSH per-connection server daemon (10.0.0.1:41786).
Mar 7 02:04:13.892440 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 41786 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:13.892212 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:13.953486 systemd-logind[1563]: New session 47 of user core.
Mar 7 02:04:13.977831 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 7 02:04:14.431112 sshd[5071]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:14.441002 systemd[1]: sshd@46-10.0.0.132:22-10.0.0.1:41786.service: Deactivated successfully.
Mar 7 02:04:14.461412 systemd[1]: session-47.scope: Deactivated successfully.
Mar 7 02:04:14.489495 systemd-logind[1563]: Session 47 logged out. Waiting for processes to exit.
Mar 7 02:04:14.497995 systemd-logind[1563]: Removed session 47.
Mar 7 02:04:19.463943 systemd[1]: Started sshd@47-10.0.0.132:22-10.0.0.1:41816.service - OpenSSH per-connection server daemon (10.0.0.1:41816).
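The recurring kubelet `dns.go:153` "Nameserver limits exceeded" warnings above arise because the glibc resolver honors at most three `nameserver` entries in `/etc/resolv.conf`, so kubelet drops the surplus and logs the list it actually applied. A minimal sketch of that truncation under those assumptions (the function name is illustrative, not kubelet's actual code):

```python
# glibc's resolver (resolv.h MAXNS) uses at most three nameservers.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Parse resolv.conf-style text and keep only the first three nameservers."""
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS]

conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
# The fourth entry is silently omitted, matching the "applied nameserver
# line is: 1.1.1.1 1.0.0.1 8.8.8.8" wording in the log.
print(applied_nameservers(conf))
```

The warning is therefore informational: DNS still works, but only via the first three servers listed.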
Mar 7 02:04:19.613315 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 41816 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:19.617025 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:19.636262 systemd-logind[1563]: New session 48 of user core.
Mar 7 02:04:19.649657 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 7 02:04:20.204883 sshd[5086]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:20.217825 systemd[1]: sshd@47-10.0.0.132:22-10.0.0.1:41816.service: Deactivated successfully.
Mar 7 02:04:20.245573 systemd-logind[1563]: Session 48 logged out. Waiting for processes to exit.
Mar 7 02:04:20.253383 systemd[1]: session-48.scope: Deactivated successfully.
Mar 7 02:04:20.257453 systemd-logind[1563]: Removed session 48.
Mar 7 02:04:25.232547 systemd[1]: Started sshd@48-10.0.0.132:22-10.0.0.1:52260.service - OpenSSH per-connection server daemon (10.0.0.1:52260).
Mar 7 02:04:25.306861 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 52260 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:25.308812 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:25.333792 systemd-logind[1563]: New session 49 of user core.
Mar 7 02:04:25.346172 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 7 02:04:25.768402 sshd[5101]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:25.780343 systemd[1]: sshd@48-10.0.0.132:22-10.0.0.1:52260.service: Deactivated successfully.
Mar 7 02:04:25.794274 systemd[1]: session-49.scope: Deactivated successfully.
Mar 7 02:04:25.808424 systemd-logind[1563]: Session 49 logged out. Waiting for processes to exit.
Mar 7 02:04:25.817443 systemd-logind[1563]: Removed session 49.
Mar 7 02:04:30.822977 systemd[1]: Started sshd@49-10.0.0.132:22-10.0.0.1:41444.service - OpenSSH per-connection server daemon (10.0.0.1:41444).
Mar 7 02:04:31.003398 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:31.018351 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:31.053073 systemd-logind[1563]: New session 50 of user core.
Mar 7 02:04:31.065804 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 7 02:04:31.671960 sshd[5118]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:31.702111 systemd[1]: sshd@49-10.0.0.132:22-10.0.0.1:41444.service: Deactivated successfully.
Mar 7 02:04:31.718460 systemd-logind[1563]: Session 50 logged out. Waiting for processes to exit.
Mar 7 02:04:31.727385 systemd[1]: session-50.scope: Deactivated successfully.
Mar 7 02:04:31.730420 systemd-logind[1563]: Removed session 50.
Mar 7 02:04:36.689169 systemd[1]: Started sshd@50-10.0.0.132:22-10.0.0.1:41476.service - OpenSSH per-connection server daemon (10.0.0.1:41476).
Mar 7 02:04:36.825719 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 41476 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:36.829885 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:36.859224 systemd-logind[1563]: New session 51 of user core.
Mar 7 02:04:36.871558 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 7 02:04:37.237447 sshd[5136]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:37.248663 systemd[1]: sshd@50-10.0.0.132:22-10.0.0.1:41476.service: Deactivated successfully.
Mar 7 02:04:37.258866 systemd-logind[1563]: Session 51 logged out. Waiting for processes to exit.
Mar 7 02:04:37.263169 systemd[1]: session-51.scope: Deactivated successfully.
Mar 7 02:04:37.274841 systemd-logind[1563]: Removed session 51.
Mar 7 02:04:42.272279 systemd[1]: Started sshd@51-10.0.0.132:22-10.0.0.1:39762.service - OpenSSH per-connection server daemon (10.0.0.1:39762).
Mar 7 02:04:42.394844 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 39762 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:42.399651 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:42.444118 systemd-logind[1563]: New session 52 of user core.
Mar 7 02:04:42.450638 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 7 02:04:42.714662 sshd[5155]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:42.723342 systemd[1]: sshd@51-10.0.0.132:22-10.0.0.1:39762.service: Deactivated successfully.
Mar 7 02:04:42.729769 systemd-logind[1563]: Session 52 logged out. Waiting for processes to exit.
Mar 7 02:04:42.730607 systemd[1]: session-52.scope: Deactivated successfully.
Mar 7 02:04:42.734610 systemd-logind[1563]: Removed session 52.
Mar 7 02:04:47.779460 systemd[1]: Started sshd@52-10.0.0.132:22-10.0.0.1:39764.service - OpenSSH per-connection server daemon (10.0.0.1:39764).
Mar 7 02:04:47.930445 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 39764 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:47.946554 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:47.996624 systemd-logind[1563]: New session 53 of user core.
Mar 7 02:04:48.011505 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 7 02:04:48.672529 sshd[5170]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:48.687427 systemd[1]: sshd@52-10.0.0.132:22-10.0.0.1:39764.service: Deactivated successfully.
Mar 7 02:04:48.718955 systemd[1]: session-53.scope: Deactivated successfully.
Mar 7 02:04:48.738722 systemd-logind[1563]: Session 53 logged out. Waiting for processes to exit.
Mar 7 02:04:48.741008 systemd-logind[1563]: Removed session 53.
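The journal above repeats the same sshd lifecycle dozens of times: accept, `pam_unix` session open, logind "New session", scope start, then the mirror-image teardown. When auditing a log like this, a quick sanity check is that every "New session N" is eventually paired with a "Removed session N". A small illustrative sketch (not part of any tooling referenced by the log):

```python
import re

# A few representative systemd-logind lines from a journal like the one above.
journal = """\
systemd-logind[1563]: New session 52 of user core.
systemd-logind[1563]: Removed session 52.
systemd-logind[1563]: New session 53 of user core.
systemd-logind[1563]: Removed session 53.
"""

opened = set(re.findall(r"New session (\d+)", journal))
closed = set(re.findall(r"Removed session (\d+)", journal))

# Sessions that were opened but never removed would indicate hung logins.
leaked = sorted(opened - closed, key=int)
print(leaked)  # [] when every session was cleaned up
```

Running this over the full journal for sessions 34-55 would likewise yield an empty list, since each session here terminates cleanly.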
Mar 7 02:04:50.231711 kubelet[2825]: E0307 02:04:50.231212 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:04:53.693031 systemd[1]: Started sshd@53-10.0.0.132:22-10.0.0.1:48960.service - OpenSSH per-connection server daemon (10.0.0.1:48960).
Mar 7 02:04:53.878332 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 48960 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:53.881351 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:53.907817 systemd-logind[1563]: New session 54 of user core.
Mar 7 02:04:53.918930 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 7 02:04:54.353286 sshd[5185]: pam_unix(sshd:session): session closed for user core
Mar 7 02:04:54.373183 systemd[1]: Started sshd@54-10.0.0.132:22-10.0.0.1:48966.service - OpenSSH per-connection server daemon (10.0.0.1:48966).
Mar 7 02:04:54.376781 systemd[1]: sshd@53-10.0.0.132:22-10.0.0.1:48960.service: Deactivated successfully.
Mar 7 02:04:54.393993 systemd-logind[1563]: Session 54 logged out. Waiting for processes to exit.
Mar 7 02:04:54.403874 systemd[1]: session-54.scope: Deactivated successfully.
Mar 7 02:04:54.423994 systemd-logind[1563]: Removed session 54.
Mar 7 02:04:54.507975 sshd[5197]: Accepted publickey for core from 10.0.0.1 port 48966 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:04:54.525249 sshd[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:04:54.550105 systemd-logind[1563]: New session 55 of user core.
Mar 7 02:04:54.565819 systemd[1]: Started session-55.scope - Session 55 of User core.
Mar 7 02:04:57.351622 containerd[1587]: time="2026-03-07T02:04:57.347088727Z" level=info msg="StopContainer for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" with timeout 30 (s)"
Mar 7 02:04:57.364032 containerd[1587]: time="2026-03-07T02:04:57.363986055Z" level=info msg="Stop container \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" with signal terminated"
Mar 7 02:04:57.442303 systemd[1]: run-containerd-runc-k8s.io-ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b-runc.H1f4Rk.mount: Deactivated successfully.
Mar 7 02:04:57.482822 containerd[1587]: time="2026-03-07T02:04:57.482725397Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 02:04:57.499238 containerd[1587]: time="2026-03-07T02:04:57.499196027Z" level=info msg="StopContainer for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" with timeout 2 (s)"
Mar 7 02:04:57.500399 containerd[1587]: time="2026-03-07T02:04:57.500337143Z" level=info msg="Stop container \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" with signal terminated"
Mar 7 02:04:57.511254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541-rootfs.mount: Deactivated successfully.
Mar 7 02:04:57.548522 systemd-networkd[1242]: lxc_health: Link DOWN
Mar 7 02:04:57.548531 systemd-networkd[1242]: lxc_health: Lost carrier
Mar 7 02:04:57.573268 containerd[1587]: time="2026-03-07T02:04:57.570526772Z" level=info msg="shim disconnected" id=302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541 namespace=k8s.io
Mar 7 02:04:57.573268 containerd[1587]: time="2026-03-07T02:04:57.570601390Z" level=warning msg="cleaning up after shim disconnected" id=302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541 namespace=k8s.io
Mar 7 02:04:57.573268 containerd[1587]: time="2026-03-07T02:04:57.570616398Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:04:57.666366 containerd[1587]: time="2026-03-07T02:04:57.666212547Z" level=info msg="StopContainer for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" returns successfully"
Mar 7 02:04:57.679890 containerd[1587]: time="2026-03-07T02:04:57.676995627Z" level=info msg="StopPodSandbox for \"e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe\""
Mar 7 02:04:57.679890 containerd[1587]: time="2026-03-07T02:04:57.677055017Z" level=info msg="Container to stop \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:04:57.692796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe-shm.mount: Deactivated successfully.
Mar 7 02:04:57.729096 containerd[1587]: time="2026-03-07T02:04:57.727898945Z" level=info msg="shim disconnected" id=ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b namespace=k8s.io
Mar 7 02:04:57.729096 containerd[1587]: time="2026-03-07T02:04:57.727964097Z" level=warning msg="cleaning up after shim disconnected" id=ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b namespace=k8s.io
Mar 7 02:04:57.729096 containerd[1587]: time="2026-03-07T02:04:57.727977011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:04:57.787274 containerd[1587]: time="2026-03-07T02:04:57.784806400Z" level=warning msg="cleanup warnings time=\"2026-03-07T02:04:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 02:04:57.798329 containerd[1587]: time="2026-03-07T02:04:57.798119501Z" level=info msg="StopContainer for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" returns successfully"
Mar 7 02:04:57.800628 containerd[1587]: time="2026-03-07T02:04:57.799287830Z" level=info msg="StopPodSandbox for \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\""
Mar 7 02:04:57.800628 containerd[1587]: time="2026-03-07T02:04:57.799333074Z" level=info msg="Container to stop \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:04:57.800628 containerd[1587]: time="2026-03-07T02:04:57.799354695Z" level=info msg="Container to stop \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:04:57.800628 containerd[1587]: time="2026-03-07T02:04:57.799370985Z" level=info msg="Container to stop \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:04:57.800628 containerd[1587]: time="2026-03-07T02:04:57.799385772Z" level=info msg="Container to stop \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:04:57.800628 containerd[1587]: time="2026-03-07T02:04:57.799401552Z" level=info msg="Container to stop \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:04:57.885928 containerd[1587]: time="2026-03-07T02:04:57.885027639Z" level=info msg="shim disconnected" id=e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe namespace=k8s.io
Mar 7 02:04:57.885928 containerd[1587]: time="2026-03-07T02:04:57.885097890Z" level=warning msg="cleaning up after shim disconnected" id=e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe namespace=k8s.io
Mar 7 02:04:57.885928 containerd[1587]: time="2026-03-07T02:04:57.885110774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:04:57.936185 containerd[1587]: time="2026-03-07T02:04:57.926458554Z" level=info msg="TearDown network for sandbox \"e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe\" successfully"
Mar 7 02:04:57.936185 containerd[1587]: time="2026-03-07T02:04:57.926621047Z" level=info msg="StopPodSandbox for \"e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe\" returns successfully"
Mar 7 02:04:58.015076 containerd[1587]: time="2026-03-07T02:04:58.014256394Z" level=info msg="shim disconnected" id=9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8 namespace=k8s.io
Mar 7 02:04:58.015076 containerd[1587]: time="2026-03-07T02:04:58.014323408Z" level=warning msg="cleaning up after shim disconnected" id=9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8 namespace=k8s.io
Mar 7 02:04:58.015076 containerd[1587]: time="2026-03-07T02:04:58.014336102Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:04:58.049819 kubelet[2825]: I0307 02:04:58.047738 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec146893-f95d-4382-9335-1194bbe341a8-cilium-config-path\") pod \"ec146893-f95d-4382-9335-1194bbe341a8\" (UID: \"ec146893-f95d-4382-9335-1194bbe341a8\") "
Mar 7 02:04:58.049819 kubelet[2825]: I0307 02:04:58.047814 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvrpz\" (UniqueName: \"kubernetes.io/projected/ec146893-f95d-4382-9335-1194bbe341a8-kube-api-access-dvrpz\") pod \"ec146893-f95d-4382-9335-1194bbe341a8\" (UID: \"ec146893-f95d-4382-9335-1194bbe341a8\") "
Mar 7 02:04:58.070520 kubelet[2825]: I0307 02:04:58.064530 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec146893-f95d-4382-9335-1194bbe341a8-kube-api-access-dvrpz" (OuterVolumeSpecName: "kube-api-access-dvrpz") pod "ec146893-f95d-4382-9335-1194bbe341a8" (UID: "ec146893-f95d-4382-9335-1194bbe341a8"). InnerVolumeSpecName "kube-api-access-dvrpz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 02:04:58.070520 kubelet[2825]: I0307 02:04:58.066377 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec146893-f95d-4382-9335-1194bbe341a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec146893-f95d-4382-9335-1194bbe341a8" (UID: "ec146893-f95d-4382-9335-1194bbe341a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 02:04:58.077928 containerd[1587]: time="2026-03-07T02:04:58.077628844Z" level=info msg="TearDown network for sandbox \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" successfully"
Mar 7 02:04:58.077928 containerd[1587]: time="2026-03-07T02:04:58.077705547Z" level=info msg="StopPodSandbox for \"9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8\" returns successfully"
Mar 7 02:04:58.149227 kubelet[2825]: I0307 02:04:58.149030 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-clustermesh-secrets\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.149227 kubelet[2825]: I0307 02:04:58.149192 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-etc-cni-netd\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.149227 kubelet[2825]: I0307 02:04:58.149218 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hostproc\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.149227 kubelet[2825]: I0307 02:04:58.149239 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-bpf-maps\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.149547 kubelet[2825]: I0307 02:04:58.149262 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:04:58.149547 kubelet[2825]: I0307 02:04:58.149318 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hostproc" (OuterVolumeSpecName: "hostproc") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:04:58.149622 kubelet[2825]: I0307 02:04:58.149573 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hubble-tls\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.149622 kubelet[2825]: I0307 02:04:58.149603 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cni-path\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.149753 kubelet[2825]: I0307 02:04:58.149626 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-net\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150166 kubelet[2825]: I0307 02:04:58.149654 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-lib-modules\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150166 kubelet[2825]: I0307 02:04:58.149950 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-cgroup\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150166 kubelet[2825]: I0307 02:04:58.149972 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-xtables-lock\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150166 kubelet[2825]: I0307 02:04:58.149994 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-kernel\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150166 kubelet[2825]: I0307 02:04:58.150015 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-run\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150166 kubelet[2825]: I0307 02:04:58.150037 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-config-path\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150410 kubelet[2825]: I0307 02:04:58.150279 2825 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b976r\" (UniqueName: \"kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-kube-api-access-b976r\") pod \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\" (UID: \"99004e17-b6a7-4fb7-a2b0-86ac5bab0cad\") "
Mar 7 02:04:58.150410 kubelet[2825]: I0307 02:04:58.150349 2825 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dvrpz\" (UniqueName: \"kubernetes.io/projected/ec146893-f95d-4382-9335-1194bbe341a8-kube-api-access-dvrpz\") on node \"localhost\" DevicePath \"\""
Mar 7 02:04:58.150410 kubelet[2825]: I0307 02:04:58.150369 2825 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 7 02:04:58.150410 kubelet[2825]: I0307 02:04:58.150382 2825 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 7 02:04:58.150410 kubelet[2825]: I0307 02:04:58.150398 2825 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec146893-f95d-4382-9335-1194bbe341a8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 7 02:04:58.150594 kubelet[2825]: I0307 02:04:58.150568 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150635 kubelet[2825]: I0307 02:04:58.150599 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cni-path" (OuterVolumeSpecName: "cni-path") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150635 kubelet[2825]: I0307 02:04:58.150622 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150831 kubelet[2825]: I0307 02:04:58.150644 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150831 kubelet[2825]: I0307 02:04:58.150708 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150831 kubelet[2825]: I0307 02:04:58.150772 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150831 kubelet[2825]: I0307 02:04:58.150797 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.150831 kubelet[2825]: I0307 02:04:58.150819 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:58.156907 kubelet[2825]: I0307 02:04:58.156865 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-kube-api-access-b976r" (OuterVolumeSpecName: "kube-api-access-b976r") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "kube-api-access-b976r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:04:58.170808 kubelet[2825]: I0307 02:04:58.169742 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:04:58.170808 kubelet[2825]: I0307 02:04:58.170741 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:04:58.171992 kubelet[2825]: I0307 02:04:58.171874 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" (UID: "99004e17-b6a7-4fb7-a2b0-86ac5bab0cad"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250881 2825 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b976r\" (UniqueName: \"kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-kube-api-access-b976r\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250920 2825 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250937 2825 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250952 2825 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250968 2825 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250983 2825 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.250996 2825 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.252543 kubelet[2825]: I0307 02:04:58.251009 2825 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.253014 kubelet[2825]: I0307 02:04:58.251022 2825 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.253014 kubelet[2825]: I0307 02:04:58.251035 2825 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.253014 kubelet[2825]: I0307 02:04:58.251049 2825 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.253014 kubelet[2825]: I0307 02:04:58.251063 2825 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:58.421247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b-rootfs.mount: Deactivated successfully. Mar 7 02:04:58.421479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8c496290f1eee51aca8c45393cbbd47eacedf10dbddfe8cb6db3306784316fe-rootfs.mount: Deactivated successfully. Mar 7 02:04:58.422006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8-rootfs.mount: Deactivated successfully. 
Mar 7 02:04:58.422255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9112408c8356e7d7ff9aec29a50317c036d2f55e988ced369f2342f3a0fcfbb8-shm.mount: Deactivated successfully. Mar 7 02:04:58.422417 systemd[1]: var-lib-kubelet-pods-ec146893\x2df95d\x2d4382\x2d9335\x2d1194bbe341a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddvrpz.mount: Deactivated successfully. Mar 7 02:04:58.422590 systemd[1]: var-lib-kubelet-pods-99004e17\x2db6a7\x2d4fb7\x2da2b0\x2d86ac5bab0cad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 02:04:58.422792 systemd[1]: var-lib-kubelet-pods-99004e17\x2db6a7\x2d4fb7\x2da2b0\x2d86ac5bab0cad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 7 02:04:58.422950 systemd[1]: var-lib-kubelet-pods-99004e17\x2db6a7\x2d4fb7\x2da2b0\x2d86ac5bab0cad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db976r.mount: Deactivated successfully. Mar 7 02:04:58.444762 kubelet[2825]: I0307 02:04:58.444545 2825 scope.go:117] "RemoveContainer" containerID="302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541" Mar 7 02:04:58.453472 containerd[1587]: time="2026-03-07T02:04:58.449030328Z" level=info msg="RemoveContainer for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\"" Mar 7 02:04:58.475920 containerd[1587]: time="2026-03-07T02:04:58.474221944Z" level=info msg="RemoveContainer for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" returns successfully" Mar 7 02:04:58.475920 containerd[1587]: time="2026-03-07T02:04:58.474847110Z" level=error msg="ContainerStatus for \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\": not found" Mar 7 02:04:58.476170 kubelet[2825]: I0307 02:04:58.474560 2825 scope.go:117] "RemoveContainer" 
containerID="302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541" Mar 7 02:04:58.476170 kubelet[2825]: E0307 02:04:58.474963 2825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\": not found" containerID="302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541" Mar 7 02:04:58.476170 kubelet[2825]: I0307 02:04:58.474991 2825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541"} err="failed to get container status \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\": rpc error: code = NotFound desc = an error occurred when try to find container \"302ce595ccbb671220c6242a5391341750469557b2bc0fbe1050d293475ee541\": not found" Mar 7 02:04:58.476170 kubelet[2825]: I0307 02:04:58.475025 2825 scope.go:117] "RemoveContainer" containerID="ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b" Mar 7 02:04:58.484842 containerd[1587]: time="2026-03-07T02:04:58.484462244Z" level=info msg="RemoveContainer for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\"" Mar 7 02:04:58.508545 containerd[1587]: time="2026-03-07T02:04:58.506972247Z" level=info msg="RemoveContainer for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" returns successfully" Mar 7 02:04:58.508702 kubelet[2825]: I0307 02:04:58.507284 2825 scope.go:117] "RemoveContainer" containerID="12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0" Mar 7 02:04:58.522789 containerd[1587]: time="2026-03-07T02:04:58.520513941Z" level=info msg="RemoveContainer for \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\"" Mar 7 02:04:58.546824 containerd[1587]: time="2026-03-07T02:04:58.545634679Z" level=info msg="RemoveContainer for 
\"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\" returns successfully" Mar 7 02:04:58.546997 kubelet[2825]: I0307 02:04:58.546368 2825 scope.go:117] "RemoveContainer" containerID="d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b" Mar 7 02:04:58.554439 containerd[1587]: time="2026-03-07T02:04:58.553764714Z" level=info msg="RemoveContainer for \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\"" Mar 7 02:04:58.609586 containerd[1587]: time="2026-03-07T02:04:58.606298357Z" level=info msg="RemoveContainer for \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\" returns successfully" Mar 7 02:04:58.612827 kubelet[2825]: I0307 02:04:58.610204 2825 scope.go:117] "RemoveContainer" containerID="6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d" Mar 7 02:04:58.618047 containerd[1587]: time="2026-03-07T02:04:58.614695220Z" level=info msg="RemoveContainer for \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\"" Mar 7 02:04:58.628159 containerd[1587]: time="2026-03-07T02:04:58.627988992Z" level=info msg="RemoveContainer for \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\" returns successfully" Mar 7 02:04:58.628459 kubelet[2825]: I0307 02:04:58.628396 2825 scope.go:117] "RemoveContainer" containerID="645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72" Mar 7 02:04:58.631818 containerd[1587]: time="2026-03-07T02:04:58.631651948Z" level=info msg="RemoveContainer for \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\"" Mar 7 02:04:58.642289 containerd[1587]: time="2026-03-07T02:04:58.642096112Z" level=info msg="RemoveContainer for \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\" returns successfully" Mar 7 02:04:58.642795 kubelet[2825]: I0307 02:04:58.642655 2825 scope.go:117] "RemoveContainer" containerID="ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b" Mar 7 02:04:58.646431 containerd[1587]: 
time="2026-03-07T02:04:58.643985374Z" level=error msg="ContainerStatus for \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\": not found" Mar 7 02:04:58.646431 containerd[1587]: time="2026-03-07T02:04:58.644589229Z" level=error msg="ContainerStatus for \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\": not found" Mar 7 02:04:58.646431 containerd[1587]: time="2026-03-07T02:04:58.645105311Z" level=error msg="ContainerStatus for \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\": not found" Mar 7 02:04:58.646431 containerd[1587]: time="2026-03-07T02:04:58.645461967Z" level=error msg="ContainerStatus for \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\": not found" Mar 7 02:04:58.646431 containerd[1587]: time="2026-03-07T02:04:58.645948783Z" level=error msg="ContainerStatus for \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\": not found" Mar 7 02:04:58.646790 kubelet[2825]: E0307 02:04:58.644260 2825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\": not found" containerID="ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b" Mar 7 02:04:58.646790 kubelet[2825]: I0307 02:04:58.644308 2825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b"} err="failed to get container status \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce371f8a1a6f5cb80414dbc42071ddfdef8eb8fd9baabd5fe7321c8ee7e5606b\": not found" Mar 7 02:04:58.646790 kubelet[2825]: I0307 02:04:58.644342 2825 scope.go:117] "RemoveContainer" containerID="12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0" Mar 7 02:04:58.646790 kubelet[2825]: E0307 02:04:58.644904 2825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\": not found" containerID="12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0" Mar 7 02:04:58.646790 kubelet[2825]: I0307 02:04:58.644929 2825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0"} err="failed to get container status \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\": rpc error: code = NotFound desc = an error occurred when try to find container \"12bfc30d2cc1025d51c01d69c2542e23c5df819ca5060de265571b168c7ecea0\": not found" Mar 7 02:04:58.646790 kubelet[2825]: I0307 02:04:58.644953 2825 scope.go:117] "RemoveContainer" containerID="d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b" Mar 7 02:04:58.647047 kubelet[2825]: E0307 02:04:58.645274 2825 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\": not found" containerID="d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b" Mar 7 02:04:58.647047 kubelet[2825]: I0307 02:04:58.645298 2825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b"} err="failed to get container status \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4b558ae193a82d4df11dee3e860cec6312c5d2dd0c8dd35319ec0024dafa59b\": not found" Mar 7 02:04:58.647047 kubelet[2825]: I0307 02:04:58.645317 2825 scope.go:117] "RemoveContainer" containerID="6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d" Mar 7 02:04:58.647047 kubelet[2825]: E0307 02:04:58.645564 2825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\": not found" containerID="6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d" Mar 7 02:04:58.647047 kubelet[2825]: I0307 02:04:58.645586 2825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d"} err="failed to get container status \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f0b3c99537baa60616f5c79df28c0991bec89e8617bd50eaff9585b27d5ac1d\": not found" Mar 7 02:04:58.647047 kubelet[2825]: I0307 02:04:58.645605 2825 scope.go:117] "RemoveContainer" containerID="645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72" Mar 7 02:04:58.648257 kubelet[2825]: E0307 
02:04:58.646943 2825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\": not found" containerID="645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72" Mar 7 02:04:58.648257 kubelet[2825]: I0307 02:04:58.648176 2825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72"} err="failed to get container status \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\": rpc error: code = NotFound desc = an error occurred when try to find container \"645fbbc636f33adf20779c0c695d7e300e4a851524bfecc5f59d3c61c1e86c72\": not found" Mar 7 02:04:58.999633 sshd[5197]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:59.025962 systemd[1]: sshd@54-10.0.0.132:22-10.0.0.1:48966.service: Deactivated successfully. Mar 7 02:04:59.042410 systemd[1]: session-55.scope: Deactivated successfully. Mar 7 02:04:59.049858 systemd-logind[1563]: Session 55 logged out. Waiting for processes to exit. Mar 7 02:04:59.079548 systemd[1]: Started sshd@55-10.0.0.132:22-10.0.0.1:48968.service - OpenSSH per-connection server daemon (10.0.0.1:48968). Mar 7 02:04:59.082074 systemd-logind[1563]: Removed session 55. Mar 7 02:04:59.242548 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 48968 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:04:59.250972 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:59.315474 systemd-logind[1563]: New session 56 of user core. Mar 7 02:04:59.365868 systemd[1]: Started session-56.scope - Session 56 of User core. 
Mar 7 02:05:00.249771 kubelet[2825]: I0307 02:05:00.242283 2825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99004e17-b6a7-4fb7-a2b0-86ac5bab0cad" path="/var/lib/kubelet/pods/99004e17-b6a7-4fb7-a2b0-86ac5bab0cad/volumes" Mar 7 02:05:00.249771 kubelet[2825]: I0307 02:05:00.243411 2825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec146893-f95d-4382-9335-1194bbe341a8" path="/var/lib/kubelet/pods/ec146893-f95d-4382-9335-1194bbe341a8/volumes" Mar 7 02:05:01.486420 sshd[5373]: pam_unix(sshd:session): session closed for user core Mar 7 02:05:01.513228 systemd[1]: Started sshd@56-10.0.0.132:22-10.0.0.1:50314.service - OpenSSH per-connection server daemon (10.0.0.1:50314). Mar 7 02:05:01.514119 systemd[1]: sshd@55-10.0.0.132:22-10.0.0.1:48968.service: Deactivated successfully. Mar 7 02:05:01.545038 systemd[1]: session-56.scope: Deactivated successfully. Mar 7 02:05:01.551240 systemd-logind[1563]: Session 56 logged out. Waiting for processes to exit. Mar 7 02:05:01.556204 systemd-logind[1563]: Removed session 56. Mar 7 02:05:01.587763 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 50314 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:05:01.593334 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:05:01.622239 systemd-logind[1563]: New session 57 of user core. Mar 7 02:05:01.639524 systemd[1]: Started session-57.scope - Session 57 of User core. 
Mar 7 02:05:01.645022 kubelet[2825]: I0307 02:05:01.644450 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-lib-modules\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt" Mar 7 02:05:01.645022 kubelet[2825]: I0307 02:05:01.644512 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-host-proc-sys-net\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt" Mar 7 02:05:01.645022 kubelet[2825]: I0307 02:05:01.644538 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-cilium-run\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt" Mar 7 02:05:01.645022 kubelet[2825]: I0307 02:05:01.644560 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-cni-path\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt" Mar 7 02:05:01.645022 kubelet[2825]: I0307 02:05:01.644585 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-bpf-maps\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt" Mar 7 02:05:01.645022 kubelet[2825]: I0307 02:05:01.644648 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-host-proc-sys-kernel\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647786 kubelet[2825]: I0307 02:05:01.644712 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzq5\" (UniqueName: \"kubernetes.io/projected/adc40cde-ead7-4f19-867b-3b6f8122c272-kube-api-access-8kzq5\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647786 kubelet[2825]: I0307 02:05:01.644745 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-hostproc\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647786 kubelet[2825]: I0307 02:05:01.644769 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-cilium-cgroup\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647786 kubelet[2825]: I0307 02:05:01.644793 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-etc-cni-netd\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647786 kubelet[2825]: I0307 02:05:01.644818 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adc40cde-ead7-4f19-867b-3b6f8122c272-xtables-lock\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647786 kubelet[2825]: I0307 02:05:01.644844 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/adc40cde-ead7-4f19-867b-3b6f8122c272-cilium-ipsec-secrets\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647995 kubelet[2825]: I0307 02:05:01.644870 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adc40cde-ead7-4f19-867b-3b6f8122c272-clustermesh-secrets\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647995 kubelet[2825]: I0307 02:05:01.644892 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adc40cde-ead7-4f19-867b-3b6f8122c272-cilium-config-path\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.647995 kubelet[2825]: I0307 02:05:01.644913 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adc40cde-ead7-4f19-867b-3b6f8122c272-hubble-tls\") pod \"cilium-hf2tt\" (UID: \"adc40cde-ead7-4f19-867b-3b6f8122c272\") " pod="kube-system/cilium-hf2tt"
Mar 7 02:05:01.740465 sshd[5385]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:01.769008 systemd[1]: Started sshd@57-10.0.0.132:22-10.0.0.1:50326.service - OpenSSH per-connection server daemon (10.0.0.1:50326).
Mar 7 02:05:01.794587 systemd[1]: sshd@56-10.0.0.132:22-10.0.0.1:50314.service: Deactivated successfully.
Mar 7 02:05:01.799548 systemd[1]: session-57.scope: Deactivated successfully.
Mar 7 02:05:01.804785 systemd-logind[1563]: Session 57 logged out. Waiting for processes to exit.
Mar 7 02:05:01.815261 systemd-logind[1563]: Removed session 57.
Mar 7 02:05:01.863403 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 50326 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:05:01.870316 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:05:01.883911 kubelet[2825]: E0307 02:05:01.883867 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:01.886816 containerd[1587]: time="2026-03-07T02:05:01.884773246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hf2tt,Uid:adc40cde-ead7-4f19-867b-3b6f8122c272,Namespace:kube-system,Attempt:0,}"
Mar 7 02:05:01.898238 systemd-logind[1563]: New session 58 of user core.
Mar 7 02:05:01.908517 systemd[1]: Started session-58.scope - Session 58 of User core.
Mar 7 02:05:01.966691 containerd[1587]: time="2026-03-07T02:05:01.965118106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:05:01.966691 containerd[1587]: time="2026-03-07T02:05:01.965531928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:05:01.966691 containerd[1587]: time="2026-03-07T02:05:01.965555461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:05:01.966691 containerd[1587]: time="2026-03-07T02:05:01.965754542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:05:02.117833 containerd[1587]: time="2026-03-07T02:05:02.114104612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hf2tt,Uid:adc40cde-ead7-4f19-867b-3b6f8122c272,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\""
Mar 7 02:05:02.118001 kubelet[2825]: E0307 02:05:02.116408 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:02.139862 containerd[1587]: time="2026-03-07T02:05:02.139806284Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 02:05:02.189795 containerd[1587]: time="2026-03-07T02:05:02.189021949Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9907e4e17027398a0d68e92d34755b0677990610d79e4d586c8b2b8362ea7297\""
Mar 7 02:05:02.191432 containerd[1587]: time="2026-03-07T02:05:02.191288355Z" level=info msg="StartContainer for \"9907e4e17027398a0d68e92d34755b0677990610d79e4d586c8b2b8362ea7297\""
Mar 7 02:05:02.373274 kubelet[2825]: E0307 02:05:02.368924 2825 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 02:05:02.407923 containerd[1587]: time="2026-03-07T02:05:02.407725177Z" level=info msg="StartContainer for \"9907e4e17027398a0d68e92d34755b0677990610d79e4d586c8b2b8362ea7297\" returns successfully"
Mar 7 02:05:02.488274 kubelet[2825]: E0307 02:05:02.483940 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:02.650895 containerd[1587]: time="2026-03-07T02:05:02.650352784Z" level=info msg="shim disconnected" id=9907e4e17027398a0d68e92d34755b0677990610d79e4d586c8b2b8362ea7297 namespace=k8s.io
Mar 7 02:05:02.650895 containerd[1587]: time="2026-03-07T02:05:02.650414089Z" level=warning msg="cleaning up after shim disconnected" id=9907e4e17027398a0d68e92d34755b0677990610d79e4d586c8b2b8362ea7297 namespace=k8s.io
Mar 7 02:05:02.650895 containerd[1587]: time="2026-03-07T02:05:02.650430830Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:05:03.504614 kubelet[2825]: E0307 02:05:03.504008 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:03.538736 containerd[1587]: time="2026-03-07T02:05:03.538488457Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 02:05:03.612198 containerd[1587]: time="2026-03-07T02:05:03.609762811Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3300e7261e6663ac769dccc04e455338651daae12205f576dcc7915acdfc4ead\""
Mar 7 02:05:03.612539 containerd[1587]: time="2026-03-07T02:05:03.612267092Z" level=info msg="StartContainer for \"3300e7261e6663ac769dccc04e455338651daae12205f576dcc7915acdfc4ead\""
Mar 7 02:05:03.851755 containerd[1587]: time="2026-03-07T02:05:03.851497144Z" level=info msg="StartContainer for \"3300e7261e6663ac769dccc04e455338651daae12205f576dcc7915acdfc4ead\" returns successfully"
Mar 7 02:05:04.012788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3300e7261e6663ac769dccc04e455338651daae12205f576dcc7915acdfc4ead-rootfs.mount: Deactivated successfully.
Mar 7 02:05:04.040786 containerd[1587]: time="2026-03-07T02:05:04.038578788Z" level=info msg="shim disconnected" id=3300e7261e6663ac769dccc04e455338651daae12205f576dcc7915acdfc4ead namespace=k8s.io
Mar 7 02:05:04.040786 containerd[1587]: time="2026-03-07T02:05:04.038640183Z" level=warning msg="cleaning up after shim disconnected" id=3300e7261e6663ac769dccc04e455338651daae12205f576dcc7915acdfc4ead namespace=k8s.io
Mar 7 02:05:04.043252 containerd[1587]: time="2026-03-07T02:05:04.038652266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:05:04.540754 kubelet[2825]: E0307 02:05:04.536298 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:04.595251 containerd[1587]: time="2026-03-07T02:05:04.589773325Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 02:05:04.732468 containerd[1587]: time="2026-03-07T02:05:04.732349330Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bce9459fa8dea0e0719056d20a13caf5ddd01b5a260d43680ebaff7f6b14df6\""
Mar 7 02:05:04.735958 containerd[1587]: time="2026-03-07T02:05:04.735766221Z" level=info msg="StartContainer for \"0bce9459fa8dea0e0719056d20a13caf5ddd01b5a260d43680ebaff7f6b14df6\""
Mar 7 02:05:05.044916 containerd[1587]: time="2026-03-07T02:05:05.044870927Z" level=info msg="StartContainer for \"0bce9459fa8dea0e0719056d20a13caf5ddd01b5a260d43680ebaff7f6b14df6\" returns successfully"
Mar 7 02:05:05.155303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bce9459fa8dea0e0719056d20a13caf5ddd01b5a260d43680ebaff7f6b14df6-rootfs.mount: Deactivated successfully.
Mar 7 02:05:05.199319 containerd[1587]: time="2026-03-07T02:05:05.198984548Z" level=info msg="shim disconnected" id=0bce9459fa8dea0e0719056d20a13caf5ddd01b5a260d43680ebaff7f6b14df6 namespace=k8s.io
Mar 7 02:05:05.199319 containerd[1587]: time="2026-03-07T02:05:05.199045151Z" level=warning msg="cleaning up after shim disconnected" id=0bce9459fa8dea0e0719056d20a13caf5ddd01b5a260d43680ebaff7f6b14df6 namespace=k8s.io
Mar 7 02:05:05.199319 containerd[1587]: time="2026-03-07T02:05:05.199058346Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:05:05.588191 kubelet[2825]: E0307 02:05:05.581747 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:05.655855 containerd[1587]: time="2026-03-07T02:05:05.655212913Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 02:05:05.772213 containerd[1587]: time="2026-03-07T02:05:05.768933382Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aefaa7522f33ea3060fb713758aa89c6c57975d65077c0097ea6543fd67f0d0d\""
Mar 7 02:05:05.772213 containerd[1587]: time="2026-03-07T02:05:05.769887472Z" level=info msg="StartContainer for \"aefaa7522f33ea3060fb713758aa89c6c57975d65077c0097ea6543fd67f0d0d\""
Mar 7 02:05:06.194987 containerd[1587]: time="2026-03-07T02:05:06.193595252Z" level=info msg="StartContainer for \"aefaa7522f33ea3060fb713758aa89c6c57975d65077c0097ea6543fd67f0d0d\" returns successfully"
Mar 7 02:05:06.343325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aefaa7522f33ea3060fb713758aa89c6c57975d65077c0097ea6543fd67f0d0d-rootfs.mount: Deactivated successfully.
Mar 7 02:05:06.379763 containerd[1587]: time="2026-03-07T02:05:06.378973094Z" level=info msg="shim disconnected" id=aefaa7522f33ea3060fb713758aa89c6c57975d65077c0097ea6543fd67f0d0d namespace=k8s.io
Mar 7 02:05:06.379763 containerd[1587]: time="2026-03-07T02:05:06.379093428Z" level=warning msg="cleaning up after shim disconnected" id=aefaa7522f33ea3060fb713758aa89c6c57975d65077c0097ea6543fd67f0d0d namespace=k8s.io
Mar 7 02:05:06.379763 containerd[1587]: time="2026-03-07T02:05:06.379105631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:05:06.481576 containerd[1587]: time="2026-03-07T02:05:06.480379041Z" level=warning msg="cleanup warnings time=\"2026-03-07T02:05:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 02:05:06.640476 kubelet[2825]: E0307 02:05:06.640323 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:06.731745 containerd[1587]: time="2026-03-07T02:05:06.714350568Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 02:05:06.741402 kubelet[2825]: I0307 02:05:06.741299 2825 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T02:05:06Z","lastTransitionTime":"2026-03-07T02:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 7 02:05:06.994069 containerd[1587]: time="2026-03-07T02:05:06.992530128Z" level=info msg="CreateContainer within sandbox \"5ccfc5e18de0076f02b1f12855df65aa1535912f3c94b3daf3251ac455ae6267\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d9d3bf33ce07a9e47cc47d8b10ebc000ca1216ce322af9d96f646697d9751f25\""
Mar 7 02:05:07.000581 containerd[1587]: time="2026-03-07T02:05:07.000035945Z" level=info msg="StartContainer for \"d9d3bf33ce07a9e47cc47d8b10ebc000ca1216ce322af9d96f646697d9751f25\""
Mar 7 02:05:07.373351 kubelet[2825]: E0307 02:05:07.372020 2825 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 02:05:07.458582 containerd[1587]: time="2026-03-07T02:05:07.455097173Z" level=info msg="StartContainer for \"d9d3bf33ce07a9e47cc47d8b10ebc000ca1216ce322af9d96f646697d9751f25\" returns successfully"
Mar 7 02:05:08.738855 kubelet[2825]: E0307 02:05:08.732037 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:08.828345 kubelet[2825]: I0307 02:05:08.827542 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hf2tt" podStartSLOduration=7.827523785 podStartE2EDuration="7.827523785s" podCreationTimestamp="2026-03-07 02:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:05:08.827395541 +0000 UTC m=+453.575087689" watchObservedRunningTime="2026-03-07 02:05:08.827523785 +0000 UTC m=+453.575215922"
Mar 7 02:05:09.392097 systemd[1]: run-containerd-runc-k8s.io-d9d3bf33ce07a9e47cc47d8b10ebc000ca1216ce322af9d96f646697d9751f25-runc.TjrfRZ.mount: Deactivated successfully.
Mar 7 02:05:09.933374 kubelet[2825]: E0307 02:05:09.932264 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:10.540907 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 7 02:05:14.612329 systemd[1]: run-containerd-runc-k8s.io-d9d3bf33ce07a9e47cc47d8b10ebc000ca1216ce322af9d96f646697d9751f25-runc.tFk4GR.mount: Deactivated successfully.
Mar 7 02:05:16.247249 kubelet[2825]: E0307 02:05:16.235299 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:18.231284 kubelet[2825]: E0307 02:05:18.230498 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:19.167626 systemd-networkd[1242]: lxc_health: Link UP
Mar 7 02:05:19.262970 systemd-networkd[1242]: lxc_health: Gained carrier
Mar 7 02:05:19.264880 kubelet[2825]: E0307 02:05:19.264850 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:19.889599 kubelet[2825]: E0307 02:05:19.888523 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:20.805627 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Mar 7 02:05:20.853636 kubelet[2825]: E0307 02:05:20.850104 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:21.230613 kubelet[2825]: E0307 02:05:21.229734 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:21.855303 kubelet[2825]: E0307 02:05:21.853362 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:24.234570 kubelet[2825]: E0307 02:05:24.233071 2825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:05:27.351468 sshd[5397]: pam_unix(sshd:session): session closed for user core
Mar 7 02:05:27.358458 systemd[1]: sshd@57-10.0.0.132:22-10.0.0.1:50326.service: Deactivated successfully.
Mar 7 02:05:27.363626 systemd-logind[1563]: Session 58 logged out. Waiting for processes to exit.
Mar 7 02:05:27.364276 systemd[1]: session-58.scope: Deactivated successfully.
Mar 7 02:05:27.369456 systemd-logind[1563]: Removed session 58.