Mar 7 01:36:11.326340 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:36:58 -00 2026
Mar 7 01:36:11.326371 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a
Mar 7 01:36:11.326387 kernel: BIOS-provided physical RAM map:
Mar 7 01:36:11.326397 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:36:11.326406 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:36:11.326416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:36:11.326427 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:36:11.326435 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:36:11.326441 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:36:11.326447 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:36:11.326452 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:36:11.326461 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:36:11.326467 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:36:11.326473 kernel: NX (Execute Disable) protection: active
Mar 7 01:36:11.326480 kernel: APIC: Static calls initialized
Mar 7 01:36:11.326486 kernel: SMBIOS 2.8 present.
Mar 7 01:36:11.326495 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:36:11.326501 kernel: DMI: Memory slots populated: 1/1
Mar 7 01:36:11.326507 kernel: Hypervisor detected: KVM
Mar 7 01:36:11.326513 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:36:11.326519 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:36:11.326525 kernel: kvm-clock: using sched offset of 11847797857 cycles
Mar 7 01:36:11.326532 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:36:11.326538 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:36:11.326544 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:36:11.326551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:36:11.326559 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:36:11.326566 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:36:11.326572 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:36:11.326578 kernel: Using GB pages for direct mapping
Mar 7 01:36:11.326585 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:36:11.326591 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:36:11.326597 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326603 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326610 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326618 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:36:11.326624 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326631 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326637 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326643 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:11.326653 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:36:11.326662 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:36:11.326668 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:36:11.326675 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:36:11.326681 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:36:11.326688 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:36:11.326694 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:36:11.326701 kernel: No NUMA configuration found
Mar 7 01:36:11.326707 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:36:11.326716 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 7 01:36:11.326723 kernel: Zone ranges:
Mar 7 01:36:11.326729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:36:11.326736 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:36:11.326742 kernel: Normal empty
Mar 7 01:36:11.326749 kernel: Device empty
Mar 7 01:36:11.326755 kernel: Movable zone start for each node
Mar 7 01:36:11.326762 kernel: Early memory node ranges
Mar 7 01:36:11.326768 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:36:11.326774 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:36:11.326783 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:36:11.326790 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:36:11.326796 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:36:11.326803 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:36:11.326809 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:36:11.326816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:36:11.326822 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:36:11.326829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:36:11.326835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:36:11.326844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:36:11.326850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:36:11.326857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:36:11.326863 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:36:11.326869 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:36:11.326876 kernel: TSC deadline timer available
Mar 7 01:36:11.326882 kernel: CPU topo: Max. logical packages: 1
Mar 7 01:36:11.326889 kernel: CPU topo: Max. logical dies: 1
Mar 7 01:36:11.326895 kernel: CPU topo: Max. dies per package: 1
Mar 7 01:36:11.326904 kernel: CPU topo: Max. threads per core: 1
Mar 7 01:36:11.326910 kernel: CPU topo: Num. cores per package: 4
Mar 7 01:36:11.326917 kernel: CPU topo: Num. threads per package: 4
Mar 7 01:36:11.326923 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 7 01:36:11.326969 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:36:11.326975 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:36:11.326982 kernel: kvm-guest: setup PV sched yield
Mar 7 01:36:11.326988 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:36:11.326995 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:36:11.327004 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:36:11.327011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:36:11.327050 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 7 01:36:11.327062 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 7 01:36:11.327071 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:36:11.327080 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:36:11.327091 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:36:11.327105 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a
Mar 7 01:36:11.327178 kernel: random: crng init done
Mar 7 01:36:11.327189 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:36:11.327198 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:36:11.327205 kernel: Fallback order for Node 0: 0
Mar 7 01:36:11.327322 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 7 01:36:11.327330 kernel: Policy zone: DMA32
Mar 7 01:36:11.327338 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:36:11.327349 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:36:11.327361 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 7 01:36:11.327375 kernel: ftrace: allocated 157 pages with 5 groups
Mar 7 01:36:11.327385 kernel: Dynamic Preempt: voluntary
Mar 7 01:36:11.327396 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:36:11.327409 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:36:11.327419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:36:11.327431 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:36:11.327443 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:36:11.327450 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:36:11.327457 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:36:11.327466 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:36:11.327473 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:36:11.327480 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:36:11.327486 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:36:11.327493 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:36:11.327499 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:36:11.327513 kernel: Console: colour VGA+ 80x25
Mar 7 01:36:11.327522 kernel: printk: legacy console [ttyS0] enabled
Mar 7 01:36:11.327529 kernel: ACPI: Core revision 20240827
Mar 7 01:36:11.327535 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:36:11.327542 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:36:11.327549 kernel: x2apic enabled
Mar 7 01:36:11.327558 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:36:11.327565 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:36:11.327571 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:36:11.327578 kernel: kvm-guest: setup PV IPIs
Mar 7 01:36:11.327584 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:36:11.327594 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 7 01:36:11.327600 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:36:11.327607 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:36:11.327614 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:36:11.327621 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:36:11.327627 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:36:11.327634 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:36:11.327641 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:36:11.327649 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:36:11.327656 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:36:11.327664 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:36:11.327670 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:36:11.327677 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:36:11.327684 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:36:11.327691 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:36:11.327697 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:36:11.327704 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:36:11.327714 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:36:11.327720 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:36:11.327727 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:36:11.327734 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:36:11.327741 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:36:11.327747 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 7 01:36:11.327754 kernel: landlock: Up and running.
Mar 7 01:36:11.327761 kernel: SELinux: Initializing.
Mar 7 01:36:11.327767 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:36:11.327776 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:36:11.327783 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:36:11.327790 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:36:11.327797 kernel: signal: max sigframe size: 1776
Mar 7 01:36:11.327803 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:36:11.327810 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:36:11.327817 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 7 01:36:11.327823 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:36:11.327830 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:36:11.327839 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:36:11.327846 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:36:11.327852 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:36:11.327859 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:36:11.327866 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145096K reserved, 0K cma-reserved)
Mar 7 01:36:11.327873 kernel: devtmpfs: initialized
Mar 7 01:36:11.327879 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:36:11.327886 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:36:11.327893 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:36:11.327901 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:36:11.327908 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:36:11.327915 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:36:11.327922 kernel: audit: type=2000 audit(1772847366.174:1): state=initialized audit_enabled=0 res=1
Mar 7 01:36:11.327928 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:36:11.327935 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:36:11.327942 kernel: cpuidle: using governor menu
Mar 7 01:36:11.327948 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:36:11.327955 kernel: dca service started, version 1.12.1
Mar 7 01:36:11.327964 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 7 01:36:11.327970 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:36:11.327977 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:36:11.327984 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:36:11.327991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:36:11.327997 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:36:11.328004 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:36:11.328011 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:36:11.328020 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:36:11.328028 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:36:11.328041 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:36:11.328054 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:36:11.328064 kernel: ACPI: Interpreter enabled
Mar 7 01:36:11.328073 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:36:11.328086 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:36:11.328097 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:36:11.328108 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:36:11.328167 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:36:11.328178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:36:11.328662 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:36:11.328796 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:36:11.328916 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:36:11.328926 kernel: PCI host bridge to bus 0000:00
Mar 7 01:36:11.329163 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:36:11.329377 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:36:11.329532 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:36:11.329644 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:36:11.329755 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:36:11.329863 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:36:11.329998 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:36:11.330466 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 7 01:36:11.330698 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 7 01:36:11.330847 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:36:11.330967 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:36:11.331169 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:36:11.331412 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:36:11.331644 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 7 01:36:11.331934 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 7 01:36:11.332314 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:36:11.332559 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:36:11.332865 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 7 01:36:11.333044 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 7 01:36:11.333338 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:36:11.333524 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:36:11.333769 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 7 01:36:11.333931 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 7 01:36:11.334075 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:36:11.334331 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:36:11.334509 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:36:11.334738 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 7 01:36:11.334860 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:36:11.335046 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 7 01:36:11.335290 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 7 01:36:11.335455 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:36:11.335671 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 7 01:36:11.335845 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 7 01:36:11.335865 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:36:11.335878 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:36:11.335893 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:36:11.335900 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:36:11.335907 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:36:11.335914 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:36:11.335921 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:36:11.335928 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:36:11.335935 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:36:11.335942 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:36:11.335948 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:36:11.335958 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:36:11.335964 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:36:11.335971 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:36:11.335978 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:36:11.335984 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:36:11.335991 kernel: iommu: Default domain type: Translated
Mar 7 01:36:11.335998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:36:11.336005 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:36:11.336012 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:36:11.336021 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:36:11.336028 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:36:11.336294 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:36:11.336456 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:36:11.336632 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:36:11.336644 kernel: vgaarb: loaded
Mar 7 01:36:11.336652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:36:11.336659 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:36:11.336670 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:36:11.336677 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:36:11.336684 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:36:11.336691 kernel: pnp: PnP ACPI init
Mar 7 01:36:11.337056 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:36:11.337076 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:36:11.337087 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:36:11.337097 kernel: NET: Registered PF_INET protocol family
Mar 7 01:36:11.337108 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:36:11.337180 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:36:11.337191 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:36:11.337202 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:36:11.337285 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:36:11.337300 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:36:11.337312 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:36:11.337323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:36:11.337330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:36:11.337343 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:36:11.337474 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:36:11.337644 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:36:11.337813 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:36:11.337928 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:36:11.338108 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:36:11.338312 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:36:11.338323 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:36:11.338331 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 7 01:36:11.338342 kernel: Initialise system trusted keyrings
Mar 7 01:36:11.338349 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:36:11.338356 kernel: Key type asymmetric registered
Mar 7 01:36:11.338363 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:36:11.338370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 01:36:11.338377 kernel: io scheduler mq-deadline registered
Mar 7 01:36:11.338384 kernel: io scheduler kyber registered
Mar 7 01:36:11.338391 kernel: io scheduler bfq registered
Mar 7 01:36:11.338398 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:36:11.338408 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:36:11.338415 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:36:11.338421 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:36:11.338428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:36:11.338441 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:36:11.338454 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:36:11.338464 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:36:11.338474 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:36:11.338767 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:36:11.338790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:36:11.338939 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:36:11.339081 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:36:10 UTC (1772847370)
Mar 7 01:36:11.339341 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:36:11.339357 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:36:11.339370 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:36:11.339383 kernel: Segment Routing with IPv6
Mar 7 01:36:11.339397 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:36:11.339407 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:36:11.339418 kernel: Key type dns_resolver registered
Mar 7 01:36:11.339430 kernel: IPI shorthand broadcast: enabled
Mar 7 01:36:11.339441 kernel: sched_clock: Marking stable (4247038351, 527577625)->(5026619484, -252003508)
Mar 7 01:36:11.339451 kernel: registered taskstats version 1
Mar 7 01:36:11.339506 kernel: Loading compiled-in X.509 certificates
Mar 7 01:36:11.339555 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 4993b830947107214da89b35109513d59d4558ae'
Mar 7 01:36:11.339605 kernel: Demotion targets for Node 0: null
Mar 7 01:36:11.339617 kernel: Key type .fscrypt registered
Mar 7 01:36:11.339634 kernel: Key type fscrypt-provisioning registered
Mar 7 01:36:11.339645 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:36:11.339655 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:36:11.339666 kernel: ima: No architecture policies found
Mar 7 01:36:11.339677 kernel: clk: Disabling unused clocks
Mar 7 01:36:11.339688 kernel: Warning: unable to open an initial console.
Mar 7 01:36:11.339749 kernel: Freeing unused kernel image (initmem) memory: 46192K
Mar 7 01:36:11.339760 kernel: Write protecting the kernel read-only data: 40960k
Mar 7 01:36:11.339774 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 7 01:36:11.339785 kernel: Run /init as init process
Mar 7 01:36:11.339796 kernel: with arguments:
Mar 7 01:36:11.339807 kernel: /init
Mar 7 01:36:11.339818 kernel: with environment:
Mar 7 01:36:11.339829 kernel: HOME=/
Mar 7 01:36:11.339840 kernel: TERM=linux
Mar 7 01:36:11.339852 systemd[1]: Successfully made /usr/ read-only.
Mar 7 01:36:11.339870 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 01:36:11.339887 systemd[1]: Detected virtualization kvm.
Mar 7 01:36:11.339899 systemd[1]: Detected architecture x86-64.
Mar 7 01:36:11.339911 systemd[1]: Running in initrd.
Mar 7 01:36:11.339923 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:36:11.339934 systemd[1]: Hostname set to .
Mar 7 01:36:11.339949 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:36:11.339960 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:36:11.339977 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:36:11.340006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:36:11.340022 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:36:11.340036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:36:11.340049 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:36:11.340066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:36:11.340081 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:36:11.340094 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:36:11.340108 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:36:11.340175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:36:11.340189 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:36:11.340202 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:36:11.340307 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:36:11.340328 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:36:11.340339 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:36:11.340353 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:36:11.340366 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:36:11.340379 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 7 01:36:11.340390 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:36:11.340404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:36:11.340416 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:36:11.340429 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:36:11.340446 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:36:11.340459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:36:11.340473 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:36:11.340484 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 7 01:36:11.340499 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:36:11.340510 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:36:11.340523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:36:11.340536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:36:11.340554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:36:11.340646 systemd-journald[200]: Collecting audit messages is disabled. Mar 7 01:36:11.340759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:36:11.340775 systemd-journald[200]: Journal started Mar 7 01:36:11.340803 systemd-journald[200]: Runtime Journal (/run/log/journal/01a73a5034aa4908baa5b57566f54268) is 6M, max 48.3M, 42.2M free. Mar 7 01:36:11.338309 systemd-modules-load[203]: Inserted module 'overlay' Mar 7 01:36:11.348342 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:36:11.359510 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 7 01:36:11.368934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:36:11.377047 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:36:11.410406 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:36:11.412368 kernel: Bridge firewalling registered Mar 7 01:36:11.412302 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 7 01:36:11.420851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:36:11.430845 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 7 01:36:11.724370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:36:11.730068 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:36:11.740052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:36:11.759730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:36:11.761087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:36:11.783591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:36:11.802909 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:36:11.805548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:36:11.808626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:36:11.838834 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:36:11.841020 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 7 01:36:11.877932 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a Mar 7 01:36:11.894416 systemd-resolved[240]: Positive Trust Anchors: Mar 7 01:36:11.894426 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:36:11.894451 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:36:11.897014 systemd-resolved[240]: Defaulting to hostname 'linux'. Mar 7 01:36:11.898475 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:36:11.908732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:36:12.121376 kernel: SCSI subsystem initialized Mar 7 01:36:12.133289 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:36:12.148363 kernel: iscsi: registered transport (tcp) Mar 7 01:36:12.180471 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:36:12.180605 kernel: QLogic iSCSI HBA Driver Mar 7 01:36:12.219331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 7 01:36:12.260594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:36:12.263548 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:36:12.353420 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:36:12.355570 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:36:12.455349 kernel: raid6: avx2x4 gen() 21607 MB/s Mar 7 01:36:12.474347 kernel: raid6: avx2x2 gen() 21540 MB/s Mar 7 01:36:12.495379 kernel: raid6: avx2x1 gen() 12974 MB/s Mar 7 01:36:12.495471 kernel: raid6: using algorithm avx2x4 gen() 21607 MB/s Mar 7 01:36:12.517368 kernel: raid6: .... xor() 5229 MB/s, rmw enabled Mar 7 01:36:12.517514 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:36:12.545338 kernel: xor: automatically using best checksumming function avx Mar 7 01:36:12.812347 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:36:12.826764 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:36:12.835290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:36:12.894891 systemd-udevd[453]: Using default interface naming scheme 'v255'. Mar 7 01:36:12.905655 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:36:12.921798 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:36:12.988798 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Mar 7 01:36:13.115799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:36:13.118640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:36:13.342991 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:36:13.357042 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 7 01:36:13.419784 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 01:36:13.434353 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 01:36:13.439311 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:36:13.459370 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:36:13.459585 kernel: GPT:9289727 != 19775487 Mar 7 01:36:13.459634 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:36:13.459674 kernel: GPT:9289727 != 19775487 Mar 7 01:36:13.459738 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:36:13.459778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:36:13.489993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:36:13.490584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:36:13.519628 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 7 01:36:13.520669 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:36:13.543020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:36:13.559307 kernel: libata version 3.00 loaded. Mar 7 01:36:13.558045 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 7 01:36:13.569313 kernel: AES CTR mode by8 optimization enabled Mar 7 01:36:13.575297 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:36:13.575630 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:36:13.586293 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 7 01:36:13.586596 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 7 01:36:13.586844 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:36:13.611395 kernel: scsi host0: ahci Mar 7 01:36:13.611403 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:36:13.621289 kernel: scsi host1: ahci Mar 7 01:36:13.625371 kernel: scsi host2: ahci Mar 7 01:36:13.629286 kernel: scsi host3: ahci Mar 7 01:36:13.635613 kernel: scsi host4: ahci Mar 7 01:36:13.638502 kernel: scsi host5: ahci Mar 7 01:36:13.643545 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 7 01:36:13.643576 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 7 01:36:13.643592 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 7 01:36:13.643605 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 7 01:36:13.643619 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 7 01:36:13.643632 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 7 01:36:13.656362 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 7 01:36:13.989587 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:36:13.989640 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:36:13.989658 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:36:13.989675 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:36:13.989689 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:36:13.989702 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:36:13.989716 kernel: ata3.00: LPM support broken, forcing max_power Mar 7 01:36:13.948419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:36:14.028805 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 01:36:14.028836 kernel: ata3.00: applying bridge limits Mar 7 01:36:14.028851 kernel: ata3.00: LPM support broken, forcing max_power Mar 7 01:36:14.028867 kernel: ata3.00: configured for UDMA/100 Mar 7 01:36:14.028881 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:36:13.988811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:36:14.033773 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:36:14.034024 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 01:36:14.047573 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:36:14.117760 disk-uuid[619]: Primary Header is updated. Mar 7 01:36:14.117760 disk-uuid[619]: Secondary Entries is updated. Mar 7 01:36:14.117760 disk-uuid[619]: Secondary Header is updated. 
Mar 7 01:36:14.139505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:36:14.155327 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:36:14.155690 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:36:14.164324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:36:14.193335 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:36:14.734552 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:36:14.735679 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:36:14.748088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:36:14.775616 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:36:14.796395 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:36:14.841742 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:36:15.175371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:36:15.176349 disk-uuid[620]: The operation has completed successfully. Mar 7 01:36:15.252299 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:36:15.252571 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:36:15.328641 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:36:15.361907 sh[648]: Success Mar 7 01:36:15.398631 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:36:15.398717 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:36:15.403715 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 7 01:36:15.425507 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 7 01:36:15.491758 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Mar 7 01:36:15.494919 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:36:15.524832 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:36:15.554917 kernel: BTRFS: device fsid 13a9d0ca-821a-4a58-bd70-d4baef218662 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (661) Mar 7 01:36:15.554967 kernel: BTRFS info (device dm-0): first mount of filesystem 13a9d0ca-821a-4a58-bd70-d4baef218662 Mar 7 01:36:15.554984 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:36:15.591391 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 7 01:36:15.591458 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 7 01:36:15.594298 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:36:15.604358 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 7 01:36:15.611770 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:36:15.613413 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:36:15.624414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:36:15.711322 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (696) Mar 7 01:36:15.716306 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:36:15.716365 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:36:15.742592 kernel: BTRFS info (device vda6): turning on async discard Mar 7 01:36:15.742667 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 01:36:15.758569 kernel: BTRFS info (device vda6): last unmount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:36:15.765489 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 7 01:36:15.767783 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:36:16.011992 ignition[751]: Ignition 2.22.0 Mar 7 01:36:16.012013 ignition[751]: Stage: fetch-offline Mar 7 01:36:16.012062 ignition[751]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:36:16.012077 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:36:16.012352 ignition[751]: parsed url from cmdline: "" Mar 7 01:36:16.012360 ignition[751]: no config URL provided Mar 7 01:36:16.012369 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:36:16.038844 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:36:16.012382 ignition[751]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:36:16.057875 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:36:16.012426 ignition[751]: op(1): [started] loading QEMU firmware config module Mar 7 01:36:16.012433 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 01:36:16.043984 ignition[751]: op(1): [finished] loading QEMU firmware config module Mar 7 01:36:16.177397 systemd-networkd[838]: lo: Link UP Mar 7 01:36:16.177442 systemd-networkd[838]: lo: Gained carrier Mar 7 01:36:16.210362 systemd-networkd[838]: Enumeration completed Mar 7 01:36:16.212462 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:36:16.243870 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:16.243928 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:36:16.244983 systemd-networkd[838]: eth0: Link UP Mar 7 01:36:16.246611 systemd[1]: Reached target network.target - Network. 
Mar 7 01:36:16.274532 systemd-networkd[838]: eth0: Gained carrier Mar 7 01:36:16.274555 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:16.340441 systemd-networkd[838]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:36:16.609030 ignition[751]: parsing config with SHA512: 4fb295eee1d90bde94b629a63521c18259985e996d131aff75c9a33ba477108c4ce2f7557cafdeb47553c2c61825ce91b9aacb957cd54401b23dd64867ddbb0f Mar 7 01:36:16.631537 unknown[751]: fetched base config from "system" Mar 7 01:36:16.631576 unknown[751]: fetched user config from "qemu" Mar 7 01:36:16.631910 ignition[751]: fetch-offline: fetch-offline passed Mar 7 01:36:16.645031 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:36:16.631972 ignition[751]: Ignition finished successfully Mar 7 01:36:16.655129 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 01:36:16.663082 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:36:16.774592 ignition[843]: Ignition 2.22.0 Mar 7 01:36:16.774602 ignition[843]: Stage: kargs Mar 7 01:36:16.774749 ignition[843]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:36:16.774760 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:36:16.778043 ignition[843]: kargs: kargs passed Mar 7 01:36:16.778124 ignition[843]: Ignition finished successfully Mar 7 01:36:16.823884 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:36:16.838067 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 7 01:36:16.957519 ignition[851]: Ignition 2.22.0 Mar 7 01:36:16.957732 ignition[851]: Stage: disks Mar 7 01:36:16.958831 ignition[851]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:36:16.958847 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:36:16.959647 ignition[851]: disks: disks passed Mar 7 01:36:16.959694 ignition[851]: Ignition finished successfully Mar 7 01:36:16.999321 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:36:16.999861 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:36:17.002123 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:36:17.006334 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:36:17.006409 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:36:17.006461 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:36:17.008467 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:36:17.109918 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 7 01:36:17.124637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:36:17.134480 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:36:17.576723 systemd-networkd[838]: eth0: Gained IPv6LL Mar 7 01:36:17.716113 kernel: EXT4-fs (vda9): mounted filesystem 7661fa34-1ec8-43b3-a7b4-2fe8e4393215 r/w with ordered data mode. Quota mode: none. Mar 7 01:36:17.716907 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:36:17.723723 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:36:17.746511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:36:17.763028 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 7 01:36:17.764598 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:36:17.764665 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:36:17.764700 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:36:17.834638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:36:17.851611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 01:36:17.895466 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Mar 7 01:36:17.895505 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:36:17.895523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:36:17.923129 kernel: BTRFS info (device vda6): turning on async discard Mar 7 01:36:17.923344 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 01:36:17.930499 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:36:18.099090 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:36:18.130807 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:36:18.147119 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:36:18.168570 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:36:18.519918 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:36:18.527774 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:36:18.548367 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:36:18.567331 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:36:18.599964 kernel: BTRFS info (device vda6): last unmount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:36:18.634640 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:36:18.702894 ignition[983]: INFO : Ignition 2.22.0 Mar 7 01:36:18.709033 ignition[983]: INFO : Stage: mount Mar 7 01:36:18.709033 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:36:18.709033 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:36:18.709033 ignition[983]: INFO : mount: mount passed Mar 7 01:36:18.709033 ignition[983]: INFO : Ignition finished successfully Mar 7 01:36:18.732312 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:36:18.737456 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:36:18.786820 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:36:18.846120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (996) Mar 7 01:36:18.861833 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:36:18.861913 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:36:18.881737 kernel: BTRFS info (device vda6): turning on async discard Mar 7 01:36:18.881819 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 01:36:18.894662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:36:18.980142 ignition[1013]: INFO : Ignition 2.22.0 Mar 7 01:36:18.980142 ignition[1013]: INFO : Stage: files Mar 7 01:36:18.980142 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:36:18.980142 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:36:19.017435 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:36:19.017435 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:36:19.017435 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:36:19.017435 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:36:19.017435 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:36:19.017435 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:36:19.017435 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:36:19.017435 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:36:19.009329 unknown[1013]: wrote ssh authorized keys file for user: core Mar 7 01:36:19.123782 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:36:19.302263 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:36:19.302263 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 7 01:36:19.326917 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 7 01:36:19.447073 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 7 01:36:19.735820 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 7 01:36:19.735820 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:36:19.769819 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 7 01:36:19.953386 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 7 01:36:23.824568 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3960495488 wd_nsec: 3960493604 Mar 7 01:36:26.465774 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:36:26.465774 ignition[1013]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 
01:36:26.494049 ignition[1013]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 7 01:36:26.494049 ignition[1013]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 7 01:36:26.716152 ignition[1013]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:36:26.739564 ignition[1013]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:36:26.748835 ignition[1013]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 7 01:36:26.748835 ignition[1013]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:36:26.748835 ignition[1013]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:36:26.748835 ignition[1013]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:36:26.748835 ignition[1013]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:36:26.748835 ignition[1013]: INFO : files: files passed Mar 7 01:36:26.748835 ignition[1013]: INFO : Ignition finished successfully Mar 7 01:36:26.804120 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:36:26.818446 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:36:26.832023 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:36:26.858957 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:36:26.859192 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 7 01:36:26.883403 initrd-setup-root-after-ignition[1041]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 01:36:26.894146 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:36:26.894146 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:36:26.919498 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:36:26.900409 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:36:26.910509 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:36:26.930537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:36:27.059789 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:36:27.060058 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:36:27.083486 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:36:27.083724 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:36:27.109559 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:36:27.121688 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:36:27.203088 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:36:27.222537 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:36:27.273438 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:36:27.283505 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:36:27.300640 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:36:27.305771 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:36:27.305965 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:36:27.330807 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:36:27.331078 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:36:27.348192 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:36:27.357556 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:36:27.357898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:36:27.373809 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 7 01:36:27.379121 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:36:27.389454 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:36:27.401147 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:36:27.426445 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:36:27.433062 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:36:27.439508 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:36:27.439764 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:36:27.447096 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:36:27.452499 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:36:27.471419 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:36:27.483928 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:36:27.496031 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:36:27.496547 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:36:27.513963 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:36:27.514413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:36:27.525027 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:36:27.546104 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:36:27.546650 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:36:27.561046 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:36:27.571689 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:36:27.584946 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:36:27.585084 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:36:27.610575 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:36:27.610728 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:36:27.616043 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:36:27.616427 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:36:27.626684 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:36:27.626882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:36:27.640872 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:36:27.654054 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:36:27.654499 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:36:27.667582 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:36:27.674520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:36:27.674750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:36:27.691371 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:36:27.740664 ignition[1068]: INFO : Ignition 2.22.0
Mar 7 01:36:27.740664 ignition[1068]: INFO : Stage: umount
Mar 7 01:36:27.740664 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:36:27.740664 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:36:27.740664 ignition[1068]: INFO : umount: umount passed
Mar 7 01:36:27.740664 ignition[1068]: INFO : Ignition finished successfully
Mar 7 01:36:27.692379 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:36:27.723203 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:36:27.723563 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:36:27.736836 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:36:27.737931 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:36:27.738176 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:36:27.746534 systemd[1]: Stopped target network.target - Network.
Mar 7 01:36:27.749974 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:36:27.750087 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:36:27.756968 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:36:27.757045 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:36:27.765641 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:36:27.765774 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:36:27.784390 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:36:27.784487 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:36:27.794847 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:36:27.805769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:36:27.819775 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:36:27.819954 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:36:27.831666 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:36:27.831792 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:36:27.853032 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:36:27.853773 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:36:27.877207 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 7 01:36:27.878749 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:36:27.878917 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:36:27.895481 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 7 01:36:27.896101 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:36:27.896644 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:36:27.915639 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 7 01:36:27.919151 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 7 01:36:27.925626 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:36:27.925716 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:36:27.967586 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:36:27.967684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:36:27.967794 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:36:27.967976 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:36:27.968110 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:36:27.990073 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:36:27.990182 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:36:28.001853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:36:28.010958 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 7 01:36:28.052821 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:36:28.053047 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:36:28.067443 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:36:28.067868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:36:28.073060 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:36:28.073178 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:36:28.083585 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:36:28.083643 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:36:28.087860 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:36:28.087938 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:36:28.119160 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:36:28.119671 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:36:28.131506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:36:28.131661 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:36:28.154564 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:36:28.168676 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 7 01:36:28.168842 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:36:28.186912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:36:28.187080 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:36:28.203955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:36:28.204076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:36:28.245061 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:36:28.245370 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:36:28.255047 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:36:28.271615 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:36:28.324122 systemd[1]: Switching root.
Mar 7 01:36:28.374038 systemd-journald[200]: Journal stopped
Mar 7 01:36:30.522083 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:36:30.522417 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:36:30.522491 kernel: SELinux: policy capability open_perms=1
Mar 7 01:36:30.522539 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:36:30.522550 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:36:30.522561 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:36:30.522572 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:36:30.522588 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:36:30.522598 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:36:30.522608 kernel: SELinux: policy capability userspace_initial_context=0
Mar 7 01:36:30.522619 kernel: audit: type=1403 audit(1772847388.652:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:36:30.522669 systemd[1]: Successfully loaded SELinux policy in 107.490ms.
Mar 7 01:36:30.522694 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.917ms.
Mar 7 01:36:30.522706 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 01:36:30.522717 systemd[1]: Detected virtualization kvm.
Mar 7 01:36:30.522728 systemd[1]: Detected architecture x86-64.
Mar 7 01:36:30.522739 systemd[1]: Detected first boot.
Mar 7 01:36:30.522749 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:36:30.522760 zram_generator::config[1113]: No configuration found.
Mar 7 01:36:30.522804 kernel: Guest personality initialized and is inactive
Mar 7 01:36:30.522845 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 7 01:36:30.522856 kernel: Initialized host personality
Mar 7 01:36:30.522867 kernel: NET: Registered PF_VSOCK protocol family
Mar 7 01:36:30.522877 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:36:30.522889 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 7 01:36:30.522900 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:36:30.522910 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:36:30.522921 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:36:30.522962 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:36:30.522974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:36:30.522985 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:36:30.522995 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:36:30.523006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:36:30.523017 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:36:30.523029 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:36:30.523049 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:36:30.523067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:36:30.523136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:36:30.523157 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:36:30.523174 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:36:30.523185 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:36:30.523198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:36:30.523318 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:36:30.523332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:36:30.523379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:36:30.523391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:36:30.523402 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:36:30.523413 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:36:30.523423 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:36:30.523434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:36:30.523445 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:36:30.523455 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:36:30.523496 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:36:30.523507 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:36:30.523548 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:36:30.523559 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 7 01:36:30.523617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:36:30.523683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:36:30.523705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:36:30.523723 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:36:30.523734 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:36:30.523745 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:36:30.523757 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:36:30.523810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:36:30.523823 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:36:30.523863 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:36:30.523874 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:36:30.523885 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:36:30.523896 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:36:30.523907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:36:30.523918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:36:30.523960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:36:30.523971 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:36:30.523982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:36:30.524021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:36:30.524037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:36:30.524056 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:36:30.524073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:36:30.524088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:36:30.524155 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:36:30.524176 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:36:30.524187 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:36:30.524199 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:36:30.524322 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 7 01:36:30.524334 kernel: ACPI: bus type drm_connector registered
Mar 7 01:36:30.524346 kernel: fuse: init (API version 7.41)
Mar 7 01:36:30.524357 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:36:30.524367 kernel: loop: module loaded
Mar 7 01:36:30.524413 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:36:30.524426 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:36:30.524437 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:36:30.524475 systemd-journald[1198]: Collecting audit messages is disabled.
Mar 7 01:36:30.524498 systemd-journald[1198]: Journal started
Mar 7 01:36:30.524518 systemd-journald[1198]: Runtime Journal (/run/log/journal/01a73a5034aa4908baa5b57566f54268) is 6M, max 48.3M, 42.2M free.
Mar 7 01:36:29.720534 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:36:29.743743 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 01:36:29.744882 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:36:29.745658 systemd[1]: systemd-journald.service: Consumed 1.453s CPU time.
Mar 7 01:36:30.543388 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 7 01:36:30.576415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:36:30.588660 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:36:30.588736 systemd[1]: Stopped verity-setup.service.
Mar 7 01:36:30.593590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:36:30.616887 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:36:30.618444 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:36:30.623630 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:36:30.629610 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:36:30.635641 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:36:30.640766 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:36:30.646444 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:36:30.650955 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:36:30.656667 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:36:30.665155 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:36:30.665675 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:36:30.672524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:36:30.672941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:36:30.680011 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:36:30.680787 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:36:30.688997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:36:30.689489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:36:30.695585 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:36:30.695915 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:36:30.701023 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:36:30.701607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:36:30.708110 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:36:30.714588 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:36:30.720936 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:36:30.727711 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 7 01:36:30.734193 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:36:30.758528 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:36:30.766657 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:36:30.772857 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:36:30.778410 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:36:30.778512 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:36:30.786595 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 7 01:36:30.799093 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:36:30.806464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:36:30.810696 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:36:30.817089 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:36:30.822655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:36:30.825485 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:36:30.834425 systemd-journald[1198]: Time spent on flushing to /var/log/journal/01a73a5034aa4908baa5b57566f54268 is 17.006ms for 974 entries.
Mar 7 01:36:30.834425 systemd-journald[1198]: System Journal (/var/log/journal/01a73a5034aa4908baa5b57566f54268) is 8M, max 195.6M, 187.6M free.
Mar 7 01:36:30.883083 systemd-journald[1198]: Received client request to flush runtime journal.
Mar 7 01:36:30.830545 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:36:30.834596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:36:30.847457 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:36:30.855384 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:36:30.866195 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:36:30.879966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:36:30.889943 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:36:30.897615 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:36:30.908570 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:36:30.911310 kernel: loop0: detected capacity change from 0 to 128560
Mar 7 01:36:30.921626 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 7 01:36:30.930979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:36:30.947313 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:36:30.948922 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:36:30.958401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:36:30.983358 kernel: loop1: detected capacity change from 0 to 110984
Mar 7 01:36:30.988932 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:36:30.992555 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 7 01:36:31.005833 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 7 01:36:31.005901 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 7 01:36:31.016970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:36:31.044417 kernel: loop2: detected capacity change from 0 to 219192
Mar 7 01:36:31.095398 kernel: loop3: detected capacity change from 0 to 128560
Mar 7 01:36:31.124346 kernel: loop4: detected capacity change from 0 to 110984
Mar 7 01:36:31.152594 kernel: loop5: detected capacity change from 0 to 219192
Mar 7 01:36:31.182871 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 01:36:31.183934 (sd-merge)[1257]: Merged extensions into '/usr'.
Mar 7 01:36:31.190709 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:36:31.190760 systemd[1]: Reloading...
Mar 7 01:36:31.276399 zram_generator::config[1279]: No configuration found.
Mar 7 01:36:31.391770 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:36:31.789182 systemd[1]: Reloading finished in 596 ms.
Mar 7 01:36:32.077856 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:36:32.110794 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:36:32.157355 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:36:32.164396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:36:32.225088 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:36:32.225113 systemd[1]: Reloading...
Mar 7 01:36:32.260438 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 7 01:36:32.260561 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 7 01:36:32.262073 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:36:32.263109 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:36:32.267165 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:36:32.267737 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Mar 7 01:36:32.268089 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Mar 7 01:36:32.317656 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:36:32.326805 systemd-tmpfiles[1321]: Skipping /boot
Mar 7 01:36:32.427980 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:36:32.428002 systemd-tmpfiles[1321]: Skipping /boot Mar 7 01:36:32.563701 zram_generator::config[1354]: No configuration found. Mar 7 01:36:33.171036 systemd[1]: Reloading finished in 945 ms. Mar 7 01:36:33.210054 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:36:33.243700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:36:33.268115 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 7 01:36:33.276160 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:36:33.307024 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:36:33.315935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:36:33.327937 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:36:33.345745 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:36:33.382659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:33.382917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:36:33.393518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:36:33.407764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:36:33.416136 systemd-udevd[1397]: Using default interface naming scheme 'v255'. Mar 7 01:36:33.418374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:36:33.420557 augenrules[1414]: No rules Mar 7 01:36:33.425515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 7 01:36:33.425716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 01:36:33.437707 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:36:33.443634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:33.447990 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:36:33.448766 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 01:36:33.456440 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:36:33.465435 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:36:33.474662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:36:33.475074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:36:33.493756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:36:33.494156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:36:33.502658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:36:33.511622 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:36:33.513052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:36:33.548025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:33.549522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:36:33.552563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 7 01:36:33.562685 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:36:33.574593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:36:33.585067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:36:33.586670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 01:36:33.593922 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:36:33.603110 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:36:33.610736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:33.612389 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:36:33.620010 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:36:33.635899 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:36:33.639612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:36:33.647839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:36:33.648741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:36:33.658860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:36:33.659109 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:36:33.927964 systemd[1]: Finished ensure-sysext.service. Mar 7 01:36:33.938117 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Mar 7 01:36:33.941732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:33.945992 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 7 01:36:33.952632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:36:33.961205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:36:33.999485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:36:33.999609 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 01:36:33.999693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:36:33.999760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:36:34.008324 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:36:34.020004 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:36:34.020043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:34.052027 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:36:34.059466 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:36:34.059755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 7 01:36:34.069433 augenrules[1473]: /sbin/augenrules: No change Mar 7 01:36:34.368652 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:36:34.376643 augenrules[1498]: No rules Mar 7 01:36:34.380892 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:36:34.382153 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 01:36:34.427361 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 7 01:36:34.432082 systemd-resolved[1390]: Positive Trust Anchors: Mar 7 01:36:34.432138 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:36:34.432182 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:36:34.441424 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:36:34.441444 systemd-networkd[1458]: lo: Link UP Mar 7 01:36:34.441450 systemd-networkd[1458]: lo: Gained carrier Mar 7 01:36:34.445924 systemd-resolved[1390]: Defaulting to hostname 'linux'. Mar 7 01:36:34.447420 systemd-networkd[1458]: Enumeration completed Mar 7 01:36:34.448470 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:36:34.452982 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:34.452993 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 01:36:34.454791 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:36:34.455274 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:34.455365 systemd-networkd[1458]: eth0: Link UP Mar 7 01:36:34.455532 systemd-networkd[1458]: eth0: Gained carrier Mar 7 01:36:34.455590 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:34.458846 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:36:34.468512 systemd[1]: Reached target network.target - Network. Mar 7 01:36:34.474019 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:36:34.493705 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:36:34.494393 systemd-networkd[1458]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:36:34.503895 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 7 01:36:34.515190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:36:34.566619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:36:34.579949 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 7 01:36:34.601934 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:36:34.609072 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:36:35.101280 systemd-resolved[1390]: Clock change detected. Flushing caches. Mar 7 01:36:35.101463 systemd-timesyncd[1476]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Mar 7 01:36:35.101940 systemd-timesyncd[1476]: Initial clock synchronization to Sat 2026-03-07 01:36:35.101047 UTC. Mar 7 01:36:35.106156 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:36:35.113567 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:36:35.121410 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 7 01:36:35.127911 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:36:35.135139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:36:35.135317 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:36:35.140652 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:36:35.147572 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:36:35.155914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:36:35.163309 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:36:35.175858 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:36:35.188547 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 01:36:35.189022 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 01:36:35.201618 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:36:35.215071 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 7 01:36:35.223310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 7 01:36:35.229793 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 7 01:36:35.482884 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 7 01:36:35.490492 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 7 01:36:35.502987 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:36:35.525537 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:36:35.530535 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:36:35.535949 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:36:35.536098 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:36:35.539128 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:36:35.549144 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:36:35.557858 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:36:35.568503 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:36:35.609587 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:36:35.615984 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:36:35.623708 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 7 01:36:35.635635 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:36:35.643511 jq[1535]: false Mar 7 01:36:35.646921 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:36:35.654789 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:36:35.668589 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:36:35.690139 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 7 01:36:35.690650 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing passwd entry cache Mar 7 01:36:35.691308 oslogin_cache_refresh[1537]: Refreshing passwd entry cache Mar 7 01:36:35.693381 extend-filesystems[1536]: Found /dev/vda6 Mar 7 01:36:35.703389 extend-filesystems[1536]: Found /dev/vda9 Mar 7 01:36:35.712102 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:36:35.721131 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:36:35.721755 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting users, quitting Mar 7 01:36:35.721755 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 7 01:36:35.721716 oslogin_cache_refresh[1537]: Failure getting users, quitting Mar 7 01:36:35.721903 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing group entry cache Mar 7 01:36:35.721740 oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 7 01:36:35.721812 oslogin_cache_refresh[1537]: Refreshing group entry cache Mar 7 01:36:35.723998 extend-filesystems[1536]: Checking size of /dev/vda9 Mar 7 01:36:35.730762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:36:35.735941 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:36:35.744155 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting groups, quitting Mar 7 01:36:35.744155 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Mar 7 01:36:35.744106 oslogin_cache_refresh[1537]: Failure getting groups, quitting Mar 7 01:36:35.744130 oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 7 01:36:35.746982 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:36:35.769597 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:36:35.787422 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:36:35.788694 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:36:35.790523 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 7 01:36:35.790974 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 7 01:36:35.799007 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:36:35.799897 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:36:35.815096 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:36:36.079839 extend-filesystems[1536]: Resized partition /dev/vda9 Mar 7 01:36:35.816459 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 7 01:36:36.091610 jq[1557]: true Mar 7 01:36:36.092013 update_engine[1554]: I20260307 01:36:36.087007 1554 main.cc:92] Flatcar Update Engine starting Mar 7 01:36:36.108979 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Mar 7 01:36:36.121017 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:36:36.153357 jq[1570]: true Mar 7 01:36:36.156994 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:36:36.185867 systemd-networkd[1458]: eth0: Gained IPv6LL Mar 7 01:36:36.206301 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:36:36.354047 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:36:36.362334 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:36:36.417663 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:36:36.410523 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 01:36:36.459847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:36:36.516021 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:36:36.586740 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:36:36.586740 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:36:36.586740 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:36:36.602574 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 7 01:36:36.615827 bash[1598]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:36:36.616024 extend-filesystems[1536]: Resized filesystem in /dev/vda9 Mar 7 01:36:36.620138 dbus-daemon[1533]: [system] SELinux support is enabled Mar 7 01:36:36.627836 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:36:36.631935 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:36:36.636342 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:36:36.637794 update_engine[1554]: I20260307 01:36:36.637669 1554 update_check_scheduler.cc:74] Next update check in 8m57s Mar 7 01:36:37.118555 systemd-logind[1546]: Watching system buttons on /dev/input/event2 (Power Button) Mar 7 01:36:37.118597 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:36:37.118937 systemd-logind[1546]: New seat seat0. Mar 7 01:36:37.321866 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:36:37.500850 kernel: kvm_amd: TSC scaling supported Mar 7 01:36:37.501057 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:36:37.501087 kernel: kvm_amd: Nested Paging enabled Mar 7 01:36:37.501106 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:36:37.501124 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:36:37.791551 kernel: hrtimer: interrupt took 9686230 ns Mar 7 01:36:38.037939 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:36:39.356378 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:36:39.368349 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:36:39.397962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:36:39.412845 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 7 01:36:39.460536 tar[1564]: linux-amd64/LICENSE Mar 7 01:36:39.460536 tar[1564]: linux-amd64/helm Mar 7 01:36:39.468761 dbus-daemon[1533]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:36:39.498624 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:36:39.594405 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:49856.service - OpenSSH per-connection server daemon (10.0.0.1:49856). Mar 7 01:36:39.606976 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:36:39.607139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:36:39.607329 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:36:39.615627 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:36:39.615658 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:36:39.632787 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 01:36:39.633887 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 01:36:39.644475 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:36:39.644899 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:36:39.659503 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:36:39.676944 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:36:39.800857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:36:39.897974 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 7 01:36:40.227888 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:36:40.336829 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:36:40.497397 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:36:40.516619 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:36:40.657896 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:36:40.702736 containerd[1571]: time="2026-03-07T01:36:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 7 01:36:40.706005 containerd[1571]: time="2026-03-07T01:36:40.705808058Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 7 01:36:40.806114 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 49856 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:40.853497 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:41.014456 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:36:41.026510 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 7 01:36:41.045915 containerd[1571]: time="2026-03-07T01:36:41.041552335Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=5.472925ms Mar 7 01:36:41.048016 containerd[1571]: time="2026-03-07T01:36:41.047687176Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 7 01:36:41.051820 containerd[1571]: time="2026-03-07T01:36:41.051718119Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 7 01:36:41.052600 containerd[1571]: time="2026-03-07T01:36:41.052576893Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 7 01:36:41.052954 containerd[1571]: time="2026-03-07T01:36:41.052932406Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 7 01:36:41.053157 containerd[1571]: time="2026-03-07T01:36:41.053140454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 01:36:41.053835 containerd[1571]: time="2026-03-07T01:36:41.053456795Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 01:36:41.057382 containerd[1571]: time="2026-03-07T01:36:41.056935568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 01:36:41.059521 containerd[1571]: time="2026-03-07T01:36:41.058387859Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 01:36:41.059521 containerd[1571]: time="2026-03-07T01:36:41.058407876Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 01:36:41.064610 containerd[1571]: time="2026-03-07T01:36:41.062699055Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 01:36:41.064610 containerd[1571]: time="2026-03-07T01:36:41.062906473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 7 01:36:41.064610 containerd[1571]: time="2026-03-07T01:36:41.063438035Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 7 01:36:41.064610 containerd[1571]: time="2026-03-07T01:36:41.064234962Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 01:36:41.064610 containerd[1571]: time="2026-03-07T01:36:41.064409548Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 01:36:41.064610 containerd[1571]: time="2026-03-07T01:36:41.064422512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 7 01:36:41.065067 containerd[1571]: time="2026-03-07T01:36:41.064843438Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 7 01:36:41.067624 containerd[1571]: time="2026-03-07T01:36:41.067568004Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 7 01:36:41.067967 containerd[1571]: time="2026-03-07T01:36:41.067948894Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:36:41.099045 systemd-logind[1546]: New session 1 of user core. 
Mar 7 01:36:41.108620 containerd[1571]: time="2026-03-07T01:36:41.107862915Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 7 01:36:41.108844 containerd[1571]: time="2026-03-07T01:36:41.108822076Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 7 01:36:41.113612 containerd[1571]: time="2026-03-07T01:36:41.113122863Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 7 01:36:41.114245 containerd[1571]: time="2026-03-07T01:36:41.113989190Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 7 01:36:41.114409 containerd[1571]: time="2026-03-07T01:36:41.114383106Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 7 01:36:41.115614 containerd[1571]: time="2026-03-07T01:36:41.115398210Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 7 01:36:41.116103 containerd[1571]: time="2026-03-07T01:36:41.116082428Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 7 01:36:41.116280 containerd[1571]: time="2026-03-07T01:36:41.116157868Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 7 01:36:41.118271 containerd[1571]: time="2026-03-07T01:36:41.116570459Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 7 01:36:41.118271 containerd[1571]: time="2026-03-07T01:36:41.116595275Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 7 01:36:41.118271 containerd[1571]: time="2026-03-07T01:36:41.116605925Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 7 
01:36:41.118271 containerd[1571]: time="2026-03-07T01:36:41.116701984Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 7 01:36:41.118616 containerd[1571]: time="2026-03-07T01:36:41.118566373Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 7 01:36:41.119535 containerd[1571]: time="2026-03-07T01:36:41.119403587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 7 01:36:41.125857 containerd[1571]: time="2026-03-07T01:36:41.125541764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 7 01:36:41.126625 containerd[1571]: time="2026-03-07T01:36:41.126595410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 7 01:36:41.126718 containerd[1571]: time="2026-03-07T01:36:41.126695727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 7 01:36:41.126792 containerd[1571]: time="2026-03-07T01:36:41.126773563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 7 01:36:41.127096 containerd[1571]: time="2026-03-07T01:36:41.127069274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 7 01:36:41.127362 containerd[1571]: time="2026-03-07T01:36:41.127335461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 7 01:36:41.127447 containerd[1571]: time="2026-03-07T01:36:41.127428305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 7 01:36:41.127521 containerd[1571]: time="2026-03-07T01:36:41.127501170Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 7 01:36:41.127659 containerd[1571]: time="2026-03-07T01:36:41.127643577Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 7 01:36:41.128070 containerd[1571]: time="2026-03-07T01:36:41.128052550Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 7 01:36:41.128134 containerd[1571]: time="2026-03-07T01:36:41.128121990Z" level=info msg="Start snapshots syncer" Mar 7 01:36:41.130601 containerd[1571]: time="2026-03-07T01:36:41.128451274Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 7 01:36:41.186981 containerd[1571]: time="2026-03-07T01:36:41.185476080Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableU
nprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 7 01:36:41.186981 containerd[1571]: time="2026-03-07T01:36:41.186138886Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.186698471Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187438903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187534682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187555341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187575408Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187597329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187619410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187636372Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187770402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187799677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187820165Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.187982448Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.188008387Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 01:36:41.188247 containerd[1571]: time="2026-03-07T01:36:41.188028283Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 01:36:41.188738 containerd[1571]: time="2026-03-07T01:36:41.188051387Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 01:36:41.188738 containerd[1571]: time="2026-03-07T01:36:41.188062818Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 7 01:36:41.188738 containerd[1571]: time="2026-03-07T01:36:41.188083006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 7 01:36:41.188738 containerd[1571]: time="2026-03-07T01:36:41.188267921Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 7 01:36:41.190837 containerd[1571]: 
time="2026-03-07T01:36:41.189374346Z" level=info msg="runtime interface created" Mar 7 01:36:41.190837 containerd[1571]: time="2026-03-07T01:36:41.189400144Z" level=info msg="created NRI interface" Mar 7 01:36:41.190837 containerd[1571]: time="2026-03-07T01:36:41.189420081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 7 01:36:41.190837 containerd[1571]: time="2026-03-07T01:36:41.190003981Z" level=info msg="Connect containerd service" Mar 7 01:36:41.190837 containerd[1571]: time="2026-03-07T01:36:41.190257154Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:36:41.194276 containerd[1571]: time="2026-03-07T01:36:41.194110779Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:36:41.252872 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:36:41.266640 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:36:41.555773 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:36:41.565881 systemd-logind[1546]: New session c1 of user core. Mar 7 01:36:42.524507 systemd[1665]: Queued start job for default target default.target. Mar 7 01:36:42.587611 systemd[1665]: Created slice app.slice - User Application Slice. Mar 7 01:36:42.587698 systemd[1665]: Reached target paths.target - Paths. Mar 7 01:36:42.588465 systemd[1665]: Reached target timers.target - Timers. Mar 7 01:36:42.604958 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 7 01:36:42.635990 containerd[1571]: time="2026-03-07T01:36:42.635694081Z" level=info msg="Start subscribing containerd event" Mar 7 01:36:42.645401 containerd[1571]: time="2026-03-07T01:36:42.636138150Z" level=info msg="Start recovering state" Mar 7 01:36:42.647286 containerd[1571]: time="2026-03-07T01:36:42.642699255Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:36:42.647286 containerd[1571]: time="2026-03-07T01:36:42.647079891Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:36:42.655224 containerd[1571]: time="2026-03-07T01:36:42.654413771Z" level=info msg="Start event monitor" Mar 7 01:36:42.655224 containerd[1571]: time="2026-03-07T01:36:42.654775497Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:36:42.655224 containerd[1571]: time="2026-03-07T01:36:42.654873961Z" level=info msg="Start streaming server" Mar 7 01:36:42.655224 containerd[1571]: time="2026-03-07T01:36:42.654961193Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 7 01:36:42.655224 containerd[1571]: time="2026-03-07T01:36:42.655015765Z" level=info msg="runtime interface starting up..." Mar 7 01:36:42.655224 containerd[1571]: time="2026-03-07T01:36:42.655047886Z" level=info msg="starting plugins..." Mar 7 01:36:42.660007 containerd[1571]: time="2026-03-07T01:36:42.659749761Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 7 01:36:42.667683 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:36:42.688828 containerd[1571]: time="2026-03-07T01:36:42.687540381Z" level=info msg="containerd successfully booted in 1.985129s" Mar 7 01:36:42.712488 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:36:42.712773 systemd[1665]: Reached target sockets.target - Sockets. Mar 7 01:36:42.712841 systemd[1665]: Reached target basic.target - Basic System. 
Mar 7 01:36:42.712898 systemd[1665]: Reached target default.target - Main User Target. Mar 7 01:36:42.712945 systemd[1665]: Startup finished in 1.106s. Mar 7 01:36:42.718958 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:36:42.840957 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:36:42.898839 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:43058.service - OpenSSH per-connection server daemon (10.0.0.1:43058). Mar 7 01:36:43.334376 tar[1564]: linux-amd64/README.md Mar 7 01:36:43.517954 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:36:43.541828 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 43058 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:43.545519 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:43.558391 systemd-logind[1546]: New session 2 of user core. Mar 7 01:36:43.592779 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:36:43.847727 sshd[1694]: Connection closed by 10.0.0.1 port 43058 Mar 7 01:36:43.851868 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:43.866567 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:43058.service: Deactivated successfully. Mar 7 01:36:43.870774 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:36:43.881440 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:36:43.883510 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:43070.service - OpenSSH per-connection server daemon (10.0.0.1:43070). Mar 7 01:36:43.896084 systemd-logind[1546]: Removed session 2. 
Mar 7 01:36:43.963814 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 43070 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:43.970481 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:44.016863 systemd-logind[1546]: New session 3 of user core. Mar 7 01:36:44.029548 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:36:44.128478 sshd[1703]: Connection closed by 10.0.0.1 port 43070 Mar 7 01:36:44.130065 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:44.141584 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:43070.service: Deactivated successfully. Mar 7 01:36:44.146893 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:36:44.149726 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:36:44.152787 systemd-logind[1546]: Removed session 3. Mar 7 01:36:45.928274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:36:45.929680 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:36:45.930661 systemd[1]: Startup finished in 4.411s (kernel) + 17.830s (initrd) + 16.893s (userspace) = 39.135s. Mar 7 01:36:45.958645 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:36:47.909976 kubelet[1717]: E0307 01:36:47.909327 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:36:47.916813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:36:47.917243 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:36:47.918430 systemd[1]: kubelet.service: Consumed 6.054s CPU time, 258.4M memory peak. Mar 7 01:36:54.155975 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:36332.service - OpenSSH per-connection server daemon (10.0.0.1:36332). Mar 7 01:36:54.248918 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 36332 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:54.251465 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:54.262244 systemd-logind[1546]: New session 4 of user core. Mar 7 01:36:54.273577 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:36:54.305493 sshd[1730]: Connection closed by 10.0.0.1 port 36332 Mar 7 01:36:54.305816 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:54.327359 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:36332.service: Deactivated successfully. Mar 7 01:36:54.330470 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:36:54.331869 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:36:54.335931 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:36348.service - OpenSSH per-connection server daemon (10.0.0.1:36348). Mar 7 01:36:54.340059 systemd-logind[1546]: Removed session 4. Mar 7 01:36:54.427953 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 36348 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:54.430830 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:54.439932 systemd-logind[1546]: New session 5 of user core. Mar 7 01:36:54.451746 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 7 01:36:54.471663 sshd[1739]: Connection closed by 10.0.0.1 port 36348 Mar 7 01:36:54.472023 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:54.493025 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:36348.service: Deactivated successfully. Mar 7 01:36:54.496884 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:36:54.498966 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:36:54.503573 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:36350.service - OpenSSH per-connection server daemon (10.0.0.1:36350). Mar 7 01:36:54.505495 systemd-logind[1546]: Removed session 5. Mar 7 01:36:54.599581 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 36350 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:54.602118 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:54.616278 systemd-logind[1546]: New session 6 of user core. Mar 7 01:36:54.626693 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:36:54.651029 sshd[1748]: Connection closed by 10.0.0.1 port 36350 Mar 7 01:36:54.651967 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:54.664962 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:36350.service: Deactivated successfully. Mar 7 01:36:54.667746 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:36:54.669771 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:36:54.673598 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:36354.service - OpenSSH per-connection server daemon (10.0.0.1:36354). Mar 7 01:36:54.675890 systemd-logind[1546]: Removed session 6. 
Mar 7 01:36:54.754966 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:54.757002 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:54.766998 systemd-logind[1546]: New session 7 of user core. Mar 7 01:36:54.781589 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:36:54.812805 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:36:54.813365 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:36:54.844300 sudo[1758]: pam_unix(sudo:session): session closed for user root Mar 7 01:36:54.847811 sshd[1757]: Connection closed by 10.0.0.1 port 36354 Mar 7 01:36:54.848268 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:54.870057 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:36354.service: Deactivated successfully. Mar 7 01:36:54.875310 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:36:54.877535 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:36:54.881942 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:36368.service - OpenSSH per-connection server daemon (10.0.0.1:36368). Mar 7 01:36:54.883959 systemd-logind[1546]: Removed session 7. Mar 7 01:36:54.965356 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 36368 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:54.968034 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:54.978867 systemd-logind[1546]: New session 8 of user core. Mar 7 01:36:54.999580 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 7 01:36:55.023565 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:36:55.024067 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:36:55.041902 sudo[1769]: pam_unix(sudo:session): session closed for user root Mar 7 01:36:55.053929 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 7 01:36:55.054553 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:36:55.085131 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 7 01:36:55.165830 augenrules[1791]: No rules Mar 7 01:36:55.168698 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:36:55.169240 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 01:36:55.171609 sudo[1768]: pam_unix(sudo:session): session closed for user root Mar 7 01:36:55.175914 sshd[1767]: Connection closed by 10.0.0.1 port 36368 Mar 7 01:36:55.176889 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Mar 7 01:36:55.189648 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:36368.service: Deactivated successfully. Mar 7 01:36:55.192561 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:36:55.194093 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:36:55.197958 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:36372.service - OpenSSH per-connection server daemon (10.0.0.1:36372). Mar 7 01:36:55.200079 systemd-logind[1546]: Removed session 8. Mar 7 01:36:55.267280 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 36372 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:36:55.269590 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:55.277851 systemd-logind[1546]: New session 9 of user core. 
Mar 7 01:36:55.285548 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:36:55.305379 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:36:55.305757 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:36:58.131527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:36:58.135304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:36:58.915925 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:36:58.954782 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:37:00.315799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:00.332863 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:00.652903 kubelet[1838]: E0307 01:37:00.652740 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:00.661337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:00.661704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:00.662545 systemd[1]: kubelet.service: Consumed 2.111s CPU time, 110.7M memory peak. 
Mar 7 01:37:01.814689 dockerd[1827]: time="2026-03-07T01:37:01.813135213Z" level=info msg="Starting up" Mar 7 01:37:01.831883 dockerd[1827]: time="2026-03-07T01:37:01.830403023Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 7 01:37:02.004713 dockerd[1827]: time="2026-03-07T01:37:02.004120250Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 7 01:37:02.280110 systemd[1]: var-lib-docker-metacopy\x2dcheck3196443950-merged.mount: Deactivated successfully. Mar 7 01:37:02.317710 dockerd[1827]: time="2026-03-07T01:37:02.317516924Z" level=info msg="Loading containers: start." Mar 7 01:37:02.343805 kernel: Initializing XFRM netlink socket Mar 7 01:37:03.157623 systemd-networkd[1458]: docker0: Link UP Mar 7 01:37:03.166501 dockerd[1827]: time="2026-03-07T01:37:03.166335270Z" level=info msg="Loading containers: done." Mar 7 01:37:03.238329 dockerd[1827]: time="2026-03-07T01:37:03.238106638Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:37:03.238625 dockerd[1827]: time="2026-03-07T01:37:03.238576946Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 7 01:37:03.238922 dockerd[1827]: time="2026-03-07T01:37:03.238844445Z" level=info msg="Initializing buildkit" Mar 7 01:37:03.349102 dockerd[1827]: time="2026-03-07T01:37:03.348895486Z" level=info msg="Completed buildkit initialization" Mar 7 01:37:03.366685 dockerd[1827]: time="2026-03-07T01:37:03.366416625Z" level=info msg="Daemon has completed initialization" Mar 7 01:37:03.367365 dockerd[1827]: time="2026-03-07T01:37:03.366816215Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:37:03.368133 systemd[1]: Started docker.service - Docker Application Container 
Engine. Mar 7 01:37:05.178947 containerd[1571]: time="2026-03-07T01:37:05.178749910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 01:37:05.903989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890903464.mount: Deactivated successfully. Mar 7 01:37:07.084386 containerd[1571]: time="2026-03-07T01:37:07.084273778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:07.085256 containerd[1571]: time="2026-03-07T01:37:07.085139624Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 7 01:37:07.088697 containerd[1571]: time="2026-03-07T01:37:07.088600224Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:07.093430 containerd[1571]: time="2026-03-07T01:37:07.093292472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:07.095069 containerd[1571]: time="2026-03-07T01:37:07.094906960Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.916034671s" Mar 7 01:37:07.095069 containerd[1571]: time="2026-03-07T01:37:07.094973745Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 01:37:07.098089 containerd[1571]: 
time="2026-03-07T01:37:07.097887844Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 01:37:08.769993 containerd[1571]: time="2026-03-07T01:37:08.768821074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:08.774080 containerd[1571]: time="2026-03-07T01:37:08.773784836Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 7 01:37:08.775401 containerd[1571]: time="2026-03-07T01:37:08.775263731Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:08.783870 containerd[1571]: time="2026-03-07T01:37:08.783639107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:08.786107 containerd[1571]: time="2026-03-07T01:37:08.785793051Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.687853891s" Mar 7 01:37:08.786107 containerd[1571]: time="2026-03-07T01:37:08.785864682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 01:37:08.787541 containerd[1571]: time="2026-03-07T01:37:08.786912674Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 01:37:09.842852 
containerd[1571]: time="2026-03-07T01:37:09.842718227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:09.844143 containerd[1571]: time="2026-03-07T01:37:09.844080021Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 7 01:37:09.845918 containerd[1571]: time="2026-03-07T01:37:09.845861928Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:09.849558 containerd[1571]: time="2026-03-07T01:37:09.849505577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:09.851059 containerd[1571]: time="2026-03-07T01:37:09.850972541Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.064021386s" Mar 7 01:37:09.851059 containerd[1571]: time="2026-03-07T01:37:09.851038691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 01:37:09.851878 containerd[1571]: time="2026-03-07T01:37:09.851693717Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 01:37:10.878512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:37:10.882468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:37:11.031006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124998310.mount: Deactivated successfully. Mar 7 01:37:11.212104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:11.229585 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:11.326471 kubelet[2141]: E0307 01:37:11.325679 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:11.331093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:11.331463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:11.332055 systemd[1]: kubelet.service: Consumed 364ms CPU time, 110.4M memory peak. 
Mar 7 01:37:11.529577 containerd[1571]: time="2026-03-07T01:37:11.529319623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:11.531754 containerd[1571]: time="2026-03-07T01:37:11.531433692Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 7 01:37:11.533569 containerd[1571]: time="2026-03-07T01:37:11.533486449Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:11.538157 containerd[1571]: time="2026-03-07T01:37:11.538066900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:11.539212 containerd[1571]: time="2026-03-07T01:37:11.539032710Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.687304631s" Mar 7 01:37:11.539212 containerd[1571]: time="2026-03-07T01:37:11.539112836Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 01:37:11.540223 containerd[1571]: time="2026-03-07T01:37:11.539878004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:37:12.037540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260186752.mount: Deactivated successfully. 
Mar 7 01:37:13.815280 containerd[1571]: time="2026-03-07T01:37:13.815119509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:13.820362 containerd[1571]: time="2026-03-07T01:37:13.820268711Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 7 01:37:13.828251 containerd[1571]: time="2026-03-07T01:37:13.827904674Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:13.844067 containerd[1571]: time="2026-03-07T01:37:13.843866148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:13.846155 containerd[1571]: time="2026-03-07T01:37:13.846061324Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.306141162s" Mar 7 01:37:13.846155 containerd[1571]: time="2026-03-07T01:37:13.846138105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:37:13.855591 containerd[1571]: time="2026-03-07T01:37:13.855438327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:37:14.665718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059591307.mount: Deactivated successfully. 
Mar 7 01:37:14.730556 containerd[1571]: time="2026-03-07T01:37:14.727371105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:14.738878 containerd[1571]: time="2026-03-07T01:37:14.738737461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 01:37:14.749453 containerd[1571]: time="2026-03-07T01:37:14.748953171Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:14.831065 containerd[1571]: time="2026-03-07T01:37:14.830775301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:14.902490 containerd[1571]: time="2026-03-07T01:37:14.900856186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.045328362s" Mar 7 01:37:14.902490 containerd[1571]: time="2026-03-07T01:37:14.901150617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:37:14.908003 containerd[1571]: time="2026-03-07T01:37:14.907474036Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:37:16.605710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471420513.mount: Deactivated successfully. Mar 7 01:37:21.426446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 7 01:37:21.456637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:21.812343 update_engine[1554]: I20260307 01:37:21.800770 1554 update_attempter.cc:509] Updating boot flags... Mar 7 01:37:24.391247 containerd[1571]: time="2026-03-07T01:37:24.389964054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:24.391247 containerd[1571]: time="2026-03-07T01:37:24.390261216Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 7 01:37:24.395801 containerd[1571]: time="2026-03-07T01:37:24.395495136Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:24.403837 containerd[1571]: time="2026-03-07T01:37:24.403727500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:24.407341 containerd[1571]: time="2026-03-07T01:37:24.406498066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 9.498821266s" Mar 7 01:37:24.407341 containerd[1571]: time="2026-03-07T01:37:24.406809141Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:37:24.561573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:37:24.592969 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:25.017441 kubelet[2300]: E0307 01:37:25.016809 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:25.024321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:25.024739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:25.025790 systemd[1]: kubelet.service: Consumed 2.646s CPU time, 109.5M memory peak. Mar 7 01:37:30.048095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:30.048475 systemd[1]: kubelet.service: Consumed 2.646s CPU time, 109.5M memory peak. Mar 7 01:37:30.056637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:30.166000 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-9.scope)... Mar 7 01:37:30.166463 systemd[1]: Reloading... Mar 7 01:37:30.427319 zram_generator::config[2376]: No configuration found. Mar 7 01:37:31.056970 systemd[1]: Reloading finished in 889 ms. Mar 7 01:37:31.183011 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:37:31.186376 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:37:31.186817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:31.186878 systemd[1]: kubelet.service: Consumed 412ms CPU time, 98.1M memory peak. Mar 7 01:37:31.189466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:31.583455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:37:31.596830 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:37:31.719288 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:37:31.720745 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:37:31.721075 kubelet[2423]: I0307 01:37:31.720977 2423 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:37:32.401831 kubelet[2423]: I0307 01:37:32.401662 2423 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:37:32.401831 kubelet[2423]: I0307 01:37:32.401739 2423 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:37:32.402972 kubelet[2423]: I0307 01:37:32.402043 2423 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:37:32.402972 kubelet[2423]: I0307 01:37:32.402064 2423 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:37:32.402972 kubelet[2423]: I0307 01:37:32.402542 2423 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:37:32.524906 kubelet[2423]: E0307 01:37:32.524738 2423 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:37:32.526012 kubelet[2423]: I0307 01:37:32.525891 2423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:37:32.538974 kubelet[2423]: I0307 01:37:32.538906 2423 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 7 01:37:32.547836 kubelet[2423]: I0307 01:37:32.547774 2423 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:37:32.550883 kubelet[2423]: I0307 01:37:32.550464 2423 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:37:32.550883 kubelet[2423]: I0307 01:37:32.550531 2423 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:37:32.550883 kubelet[2423]: I0307 01:37:32.550848 2423 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:37:32.550883 
kubelet[2423]: I0307 01:37:32.550867 2423 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:37:32.552736 kubelet[2423]: I0307 01:37:32.551049 2423 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:37:32.555327 kubelet[2423]: I0307 01:37:32.555159 2423 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:37:32.556115 kubelet[2423]: I0307 01:37:32.555992 2423 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:37:32.556115 kubelet[2423]: I0307 01:37:32.556055 2423 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:37:32.556497 kubelet[2423]: I0307 01:37:32.556362 2423 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:37:32.556543 kubelet[2423]: I0307 01:37:32.556502 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:37:32.562456 kubelet[2423]: E0307 01:37:32.562374 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:37:32.562962 kubelet[2423]: E0307 01:37:32.562789 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:37:32.568492 kubelet[2423]: I0307 01:37:32.568096 2423 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 7 01:37:32.585925 kubelet[2423]: I0307 01:37:32.578787 2423 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:37:32.585925 kubelet[2423]: I0307 01:37:32.578839 2423 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:37:32.585925 kubelet[2423]: W0307 01:37:32.585761 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:37:32.597455 kubelet[2423]: I0307 01:37:32.597357 2423 server.go:1262] "Started kubelet" Mar 7 01:37:32.598033 kubelet[2423]: I0307 01:37:32.597866 2423 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:37:32.602004 kubelet[2423]: I0307 01:37:32.601917 2423 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:37:32.602004 kubelet[2423]: I0307 01:37:32.601992 2423 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:37:32.602529 kubelet[2423]: I0307 01:37:32.602463 2423 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:37:32.603003 kubelet[2423]: I0307 01:37:32.602900 2423 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:37:32.609841 kubelet[2423]: I0307 01:37:32.608084 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:37:32.609841 kubelet[2423]: I0307 01:37:32.608324 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:37:32.609841 kubelet[2423]: E0307 01:37:32.605826 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6b5d7d0410f9 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:37:32.597260537 +0000 UTC m=+0.981823159,LastTimestamp:2026-03-07 01:37:32.597260537 +0000 UTC m=+0.981823159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:37:32.609841 kubelet[2423]: I0307 01:37:32.608622 2423 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:37:32.609841 kubelet[2423]: I0307 01:37:32.608909 2423 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:37:32.609841 kubelet[2423]: E0307 01:37:32.608889 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:37:32.609841 kubelet[2423]: I0307 01:37:32.609297 2423 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:37:32.610269 kubelet[2423]: I0307 01:37:32.610089 2423 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:37:32.610417 kubelet[2423]: I0307 01:37:32.610354 2423 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:37:32.611657 kubelet[2423]: E0307 01:37:32.611573 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:37:32.612619 kubelet[2423]: E0307 01:37:32.612561 2423 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:37:32.613411 kubelet[2423]: E0307 01:37:32.613344 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" Mar 7 01:37:32.613975 kubelet[2423]: I0307 01:37:32.613907 2423 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:37:32.681684 kubelet[2423]: I0307 01:37:32.681033 2423 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:37:32.686825 kubelet[2423]: I0307 01:37:32.686697 2423 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:37:32.686825 kubelet[2423]: I0307 01:37:32.686828 2423 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:37:32.686984 kubelet[2423]: I0307 01:37:32.686918 2423 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:37:32.689703 kubelet[2423]: E0307 01:37:32.687062 2423 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:37:32.694925 kubelet[2423]: E0307 01:37:32.693807 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:37:32.704974 kubelet[2423]: I0307 01:37:32.704752 2423 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:37:32.704974 kubelet[2423]: I0307 01:37:32.704889 2423 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:37:32.705423 kubelet[2423]: I0307 01:37:32.705018 
2423 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:37:32.710345 kubelet[2423]: E0307 01:37:32.710315 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:37:32.712591 kubelet[2423]: I0307 01:37:32.712509 2423 policy_none.go:49] "None policy: Start" Mar 7 01:37:32.712591 kubelet[2423]: I0307 01:37:32.712599 2423 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:37:32.712591 kubelet[2423]: I0307 01:37:32.712618 2423 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:37:32.716493 kubelet[2423]: I0307 01:37:32.716232 2423 policy_none.go:47] "Start" Mar 7 01:37:32.725845 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:37:32.772109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:37:32.793348 kubelet[2423]: E0307 01:37:32.793242 2423 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:37:32.806882 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:37:32.810844 kubelet[2423]: E0307 01:37:32.810710 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:37:32.816801 kubelet[2423]: E0307 01:37:32.816574 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Mar 7 01:37:32.841625 kubelet[2423]: E0307 01:37:32.838004 2423 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:37:32.841625 kubelet[2423]: I0307 01:37:32.838778 2423 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:37:32.841625 kubelet[2423]: I0307 01:37:32.838823 2423 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:37:32.841625 kubelet[2423]: I0307 01:37:32.841361 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:37:32.850616 kubelet[2423]: E0307 01:37:32.850531 2423 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:37:32.850861 kubelet[2423]: E0307 01:37:32.850808 2423 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:37:32.943633 kubelet[2423]: I0307 01:37:32.942310 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:32.943633 kubelet[2423]: E0307 01:37:32.943087 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Mar 7 01:37:33.105054 systemd[1]: Created slice kubepods-burstable-pod22e8b1608acaf529188fa1b65671a979.slice - libcontainer container kubepods-burstable-pod22e8b1608acaf529188fa1b65671a979.slice. Mar 7 01:37:33.117290 kubelet[2423]: I0307 01:37:33.115445 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:33.117290 kubelet[2423]: I0307 01:37:33.115518 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22e8b1608acaf529188fa1b65671a979-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22e8b1608acaf529188fa1b65671a979\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:33.117290 kubelet[2423]: I0307 01:37:33.115604 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22e8b1608acaf529188fa1b65671a979-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22e8b1608acaf529188fa1b65671a979\") " 
pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:33.117290 kubelet[2423]: I0307 01:37:33.115681 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:33.117290 kubelet[2423]: I0307 01:37:33.115712 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22e8b1608acaf529188fa1b65671a979-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22e8b1608acaf529188fa1b65671a979\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:33.120731 kubelet[2423]: I0307 01:37:33.115769 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:33.120731 kubelet[2423]: I0307 01:37:33.115792 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:33.120731 kubelet[2423]: I0307 01:37:33.115814 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:33.120731 kubelet[2423]: I0307 01:37:33.115924 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:33.129630 kubelet[2423]: E0307 01:37:33.128612 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:33.143867 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 7 01:37:33.149276 kubelet[2423]: I0307 01:37:33.148378 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:33.151446 kubelet[2423]: E0307 01:37:33.151302 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Mar 7 01:37:33.155596 kubelet[2423]: E0307 01:37:33.155330 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:33.163312 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 7 01:37:33.167237 kubelet[2423]: E0307 01:37:33.167056 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:33.220008 kubelet[2423]: E0307 01:37:33.217373 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Mar 7 01:37:33.460988 kubelet[2423]: E0307 01:37:33.460856 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:37:33.496856 kubelet[2423]: E0307 01:37:33.494594 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:33.506663 containerd[1571]: time="2026-03-07T01:37:33.505778049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22e8b1608acaf529188fa1b65671a979,Namespace:kube-system,Attempt:0,}" Mar 7 01:37:33.511884 kubelet[2423]: E0307 01:37:33.511842 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:33.513359 containerd[1571]: time="2026-03-07T01:37:33.512891632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 7 01:37:33.516545 kubelet[2423]: E0307 01:37:33.516231 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:33.517644 containerd[1571]: time="2026-03-07T01:37:33.517349556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 7 01:37:33.518293 kubelet[2423]: E0307 01:37:33.518063 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:37:33.561280 kubelet[2423]: I0307 01:37:33.561124 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:33.561806 kubelet[2423]: E0307 01:37:33.561719 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Mar 7 01:37:33.829445 kubelet[2423]: E0307 01:37:33.829095 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:37:34.030693 kubelet[2423]: E0307 01:37:34.030550 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Mar 7 01:37:34.037427 kubelet[2423]: E0307 01:37:34.037329 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:37:34.371996 kubelet[2423]: I0307 01:37:34.368867 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:34.371996 kubelet[2423]: E0307 01:37:34.370494 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Mar 7 01:37:34.458832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772764841.mount: Deactivated successfully. Mar 7 01:37:34.515230 containerd[1571]: time="2026-03-07T01:37:34.514732203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:37:34.518901 containerd[1571]: time="2026-03-07T01:37:34.518291529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 7 01:37:34.529727 containerd[1571]: time="2026-03-07T01:37:34.529557422Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:37:34.540900 containerd[1571]: time="2026-03-07T01:37:34.538965838Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:37:34.543932 containerd[1571]: time="2026-03-07T01:37:34.542789055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:37:34.552526 containerd[1571]: time="2026-03-07T01:37:34.552405220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 7 01:37:34.555405 containerd[1571]: time="2026-03-07T01:37:34.555313021Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 7 01:37:34.558673 containerd[1571]: time="2026-03-07T01:37:34.558532590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:37:34.560468 containerd[1571]: time="2026-03-07T01:37:34.560301454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.04061322s" Mar 7 01:37:34.576487 containerd[1571]: time="2026-03-07T01:37:34.575477382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.049602794s" Mar 7 01:37:34.591394 containerd[1571]: time="2026-03-07T01:37:34.590370213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 
1.061817126s" Mar 7 01:37:34.615855 kubelet[2423]: E0307 01:37:34.615603 2423 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:37:34.713308 containerd[1571]: time="2026-03-07T01:37:34.713220422Z" level=info msg="connecting to shim 60752f8cf0f8f11e21484274ae0da8e12a571e5c18665ab144ee5828cd5a0163" address="unix:///run/containerd/s/94d6c6a5e6e79604f301a7feba46aff03bd1a60e36d4f5476c4b89c370d579ad" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:37:34.722545 containerd[1571]: time="2026-03-07T01:37:34.722482952Z" level=info msg="connecting to shim 5c69cc34b899e0b67a0989bdeaa1a63ae70714b41935f2a701fdcee3ed81e6fc" address="unix:///run/containerd/s/8c402318ef62f8de907c611655c7534939cd2673b19eaca765fbf6b034866dd8" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:37:34.750094 containerd[1571]: time="2026-03-07T01:37:34.750022973Z" level=info msg="connecting to shim 3abef672d1d49a89edf0128df5c0b98e987b93aa55ef544e6ad5b69188cf6b1e" address="unix:///run/containerd/s/62e723f4f3fe232905dda0d9a4539003d4acb864b980211f15c8c66a9b3444f9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:37:35.150363 systemd[1]: Started cri-containerd-5c69cc34b899e0b67a0989bdeaa1a63ae70714b41935f2a701fdcee3ed81e6fc.scope - libcontainer container 5c69cc34b899e0b67a0989bdeaa1a63ae70714b41935f2a701fdcee3ed81e6fc. Mar 7 01:37:35.187575 systemd[1]: Started cri-containerd-60752f8cf0f8f11e21484274ae0da8e12a571e5c18665ab144ee5828cd5a0163.scope - libcontainer container 60752f8cf0f8f11e21484274ae0da8e12a571e5c18665ab144ee5828cd5a0163. 
Mar 7 01:37:35.204865 systemd[1]: Started cri-containerd-3abef672d1d49a89edf0128df5c0b98e987b93aa55ef544e6ad5b69188cf6b1e.scope - libcontainer container 3abef672d1d49a89edf0128df5c0b98e987b93aa55ef544e6ad5b69188cf6b1e. Mar 7 01:37:35.635488 kubelet[2423]: E0307 01:37:35.634749 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="3.2s" Mar 7 01:37:35.713445 kubelet[2423]: E0307 01:37:35.713396 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:37:35.724633 kubelet[2423]: E0307 01:37:35.724545 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:37:35.739732 containerd[1571]: time="2026-03-07T01:37:35.739673582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22e8b1608acaf529188fa1b65671a979,Namespace:kube-system,Attempt:0,} returns sandbox id \"3abef672d1d49a89edf0128df5c0b98e987b93aa55ef544e6ad5b69188cf6b1e\"" Mar 7 01:37:35.740720 containerd[1571]: time="2026-03-07T01:37:35.739679915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c69cc34b899e0b67a0989bdeaa1a63ae70714b41935f2a701fdcee3ed81e6fc\"" Mar 7 01:37:35.745951 
kubelet[2423]: E0307 01:37:35.745904 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:35.746763 kubelet[2423]: E0307 01:37:35.746378 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:35.780044 containerd[1571]: time="2026-03-07T01:37:35.779953335Z" level=info msg="CreateContainer within sandbox \"3abef672d1d49a89edf0128df5c0b98e987b93aa55ef544e6ad5b69188cf6b1e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:37:35.790089 containerd[1571]: time="2026-03-07T01:37:35.789284152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"60752f8cf0f8f11e21484274ae0da8e12a571e5c18665ab144ee5828cd5a0163\"" Mar 7 01:37:35.790089 containerd[1571]: time="2026-03-07T01:37:35.789537282Z" level=info msg="CreateContainer within sandbox \"5c69cc34b899e0b67a0989bdeaa1a63ae70714b41935f2a701fdcee3ed81e6fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:37:35.790598 kubelet[2423]: E0307 01:37:35.790531 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:35.802827 containerd[1571]: time="2026-03-07T01:37:35.802785019Z" level=info msg="CreateContainer within sandbox \"60752f8cf0f8f11e21484274ae0da8e12a571e5c18665ab144ee5828cd5a0163\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:37:35.810932 containerd[1571]: time="2026-03-07T01:37:35.810805733Z" level=info msg="Container 58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf: CDI devices from CRI 
Config.CDIDevices: []" Mar 7 01:37:35.826770 containerd[1571]: time="2026-03-07T01:37:35.826680073Z" level=info msg="CreateContainer within sandbox \"3abef672d1d49a89edf0128df5c0b98e987b93aa55ef544e6ad5b69188cf6b1e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf\"" Mar 7 01:37:35.828422 containerd[1571]: time="2026-03-07T01:37:35.828373310Z" level=info msg="StartContainer for \"58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf\"" Mar 7 01:37:35.830759 containerd[1571]: time="2026-03-07T01:37:35.830683496Z" level=info msg="connecting to shim 58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf" address="unix:///run/containerd/s/62e723f4f3fe232905dda0d9a4539003d4acb864b980211f15c8c66a9b3444f9" protocol=ttrpc version=3 Mar 7 01:37:35.836493 containerd[1571]: time="2026-03-07T01:37:35.836279105Z" level=info msg="Container a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:37:35.845282 containerd[1571]: time="2026-03-07T01:37:35.843504985Z" level=info msg="Container 29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:37:35.844425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425018237.mount: Deactivated successfully. 
Mar 7 01:37:36.138251 kubelet[2423]: I0307 01:37:36.137980 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:36.141783 containerd[1571]: time="2026-03-07T01:37:36.141645443Z" level=info msg="CreateContainer within sandbox \"5c69cc34b899e0b67a0989bdeaa1a63ae70714b41935f2a701fdcee3ed81e6fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e\"" Mar 7 01:37:36.142682 kubelet[2423]: E0307 01:37:36.142607 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:37:36.143722 kubelet[2423]: E0307 01:37:36.143076 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Mar 7 01:37:36.144940 containerd[1571]: time="2026-03-07T01:37:36.144768885Z" level=info msg="CreateContainer within sandbox \"60752f8cf0f8f11e21484274ae0da8e12a571e5c18665ab144ee5828cd5a0163\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16\"" Mar 7 01:37:36.241484 systemd[1]: Started cri-containerd-58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf.scope - libcontainer container 58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf. 
Mar 7 01:37:36.363639 containerd[1571]: time="2026-03-07T01:37:36.363567717Z" level=info msg="StartContainer for \"a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e\"" Mar 7 01:37:36.369886 containerd[1571]: time="2026-03-07T01:37:36.369848477Z" level=info msg="StartContainer for \"29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16\"" Mar 7 01:37:36.406306 containerd[1571]: time="2026-03-07T01:37:36.398701747Z" level=info msg="connecting to shim a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e" address="unix:///run/containerd/s/8c402318ef62f8de907c611655c7534939cd2673b19eaca765fbf6b034866dd8" protocol=ttrpc version=3 Mar 7 01:37:36.406306 containerd[1571]: time="2026-03-07T01:37:36.403291342Z" level=info msg="connecting to shim 29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16" address="unix:///run/containerd/s/94d6c6a5e6e79604f301a7feba46aff03bd1a60e36d4f5476c4b89c370d579ad" protocol=ttrpc version=3 Mar 7 01:37:36.508617 systemd[1]: Started cri-containerd-29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16.scope - libcontainer container 29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16. Mar 7 01:37:36.538832 systemd[1]: Started cri-containerd-a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e.scope - libcontainer container a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e. 
Mar 7 01:37:36.606611 kubelet[2423]: E0307 01:37:36.606458 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:37:36.922258 containerd[1571]: time="2026-03-07T01:37:36.920018192Z" level=info msg="StartContainer for \"58d5a42c0399e9a8c5db172a04e7165c639399545626423cb9af8888d77e18bf\" returns successfully" Mar 7 01:37:36.932696 kubelet[2423]: E0307 01:37:36.932628 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:36.933059 kubelet[2423]: E0307 01:37:36.932863 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:37.009684 containerd[1571]: time="2026-03-07T01:37:37.009048026Z" level=info msg="StartContainer for \"29c348bd96f7cbe08f8fcc50594c6ea86a8887ee8145fb6c4ec2f41d65599e16\" returns successfully" Mar 7 01:37:37.030333 containerd[1571]: time="2026-03-07T01:37:37.030109834Z" level=info msg="StartContainer for \"a32d8eac1f6848d4247d4d41e4022f92801213e2c864eb5759b9cac3d441899e\" returns successfully" Mar 7 01:37:37.946231 kubelet[2423]: E0307 01:37:37.945902 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:37.946937 kubelet[2423]: E0307 01:37:37.946261 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:37.959664 kubelet[2423]: E0307 01:37:37.959617 2423 kubelet.go:3216] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:37.959826 kubelet[2423]: E0307 01:37:37.959798 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:37.963684 kubelet[2423]: E0307 01:37:37.963315 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:37.963684 kubelet[2423]: E0307 01:37:37.963574 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:38.963898 kubelet[2423]: E0307 01:37:38.962702 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:38.963898 kubelet[2423]: E0307 01:37:38.963798 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:38.966541 kubelet[2423]: E0307 01:37:38.966304 2423 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:37:38.966541 kubelet[2423]: E0307 01:37:38.966463 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:39.348907 kubelet[2423]: I0307 01:37:39.348433 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:39.453961 kubelet[2423]: E0307 01:37:39.453847 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 01:37:39.522122 kubelet[2423]: I0307 01:37:39.520673 2423 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:37:39.522122 kubelet[2423]: E0307 01:37:39.520729 2423 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:37:39.534253 kubelet[2423]: E0307 01:37:39.534043 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:37:39.562364 kubelet[2423]: E0307 01:37:39.562253 2423 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6b5d7d0410f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:37:32.597260537 +0000 UTC m=+0.981823159,LastTimestamp:2026-03-07 01:37:32.597260537 +0000 UTC m=+0.981823159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:37:39.634328 kubelet[2423]: E0307 01:37:39.634262 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:37:39.713384 kubelet[2423]: I0307 01:37:39.713280 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:39.726087 kubelet[2423]: E0307 01:37:39.726005 2423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:39.726087 kubelet[2423]: I0307 
01:37:39.726059 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:39.728506 kubelet[2423]: E0307 01:37:39.728481 2423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:39.728810 kubelet[2423]: I0307 01:37:39.728603 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:39.730576 kubelet[2423]: E0307 01:37:39.730544 2423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:39.963302 kubelet[2423]: I0307 01:37:39.963113 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:39.965672 kubelet[2423]: E0307 01:37:39.965608 2423 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:39.965991 kubelet[2423]: E0307 01:37:39.965806 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:40.499575 kubelet[2423]: I0307 01:37:40.499502 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:40.506561 kubelet[2423]: E0307 01:37:40.506487 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:40.613681 kubelet[2423]: I0307 01:37:40.613474 2423 apiserver.go:52] 
"Watching apiserver" Mar 7 01:37:40.709484 kubelet[2423]: I0307 01:37:40.709338 2423 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:37:40.967474 kubelet[2423]: E0307 01:37:40.967347 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:42.096317 systemd[1]: Reload requested from client PID 2716 ('systemctl') (unit session-9.scope)... Mar 7 01:37:42.096362 systemd[1]: Reloading... Mar 7 01:37:42.185444 zram_generator::config[2756]: No configuration found. Mar 7 01:37:42.200407 kubelet[2423]: I0307 01:37:42.200350 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:42.209460 kubelet[2423]: E0307 01:37:42.209382 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:42.445805 kubelet[2423]: I0307 01:37:42.445620 2423 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:42.452846 kubelet[2423]: E0307 01:37:42.452549 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:42.463507 systemd[1]: Reloading finished in 366 ms. Mar 7 01:37:42.509703 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:42.510071 kubelet[2423]: I0307 01:37:42.510026 2423 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:37:42.527262 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:37:42.527634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:37:42.527705 systemd[1]: kubelet.service: Consumed 2.521s CPU time, 125.1M memory peak. Mar 7 01:37:42.530574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:42.776068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:42.794880 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:37:42.861104 kubelet[2804]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:37:42.861104 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:37:42.861530 kubelet[2804]: I0307 01:37:42.861101 2804 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:37:42.873735 kubelet[2804]: I0307 01:37:42.873667 2804 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:37:42.873735 kubelet[2804]: I0307 01:37:42.873715 2804 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:37:42.873735 kubelet[2804]: I0307 01:37:42.873742 2804 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:37:42.873890 kubelet[2804]: I0307 01:37:42.873752 2804 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:37:42.873978 kubelet[2804]: I0307 01:37:42.873938 2804 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:37:42.875448 kubelet[2804]: I0307 01:37:42.875413 2804 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:37:42.878790 kubelet[2804]: I0307 01:37:42.878736 2804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:37:42.885396 kubelet[2804]: I0307 01:37:42.885337 2804 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 7 01:37:42.890845 kubelet[2804]: I0307 01:37:42.890786 2804 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:37:42.891106 kubelet[2804]: I0307 01:37:42.891047 2804 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:37:42.891319 kubelet[2804]: I0307 01:37:42.891085 2804 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:37:42.891319 kubelet[2804]: I0307 01:37:42.891307 2804 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:37:42.891319 kubelet[2804]: I0307 01:37:42.891315 2804 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:37:42.891533 kubelet[2804]: I0307 01:37:42.891342 2804 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:37:42.891533 kubelet[2804]: I0307 01:37:42.891486 2804 state_mem.go:36] 
"Initialized new in-memory state store" Mar 7 01:37:42.891848 kubelet[2804]: I0307 01:37:42.891681 2804 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:37:42.891848 kubelet[2804]: I0307 01:37:42.891714 2804 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:37:42.891848 kubelet[2804]: I0307 01:37:42.891733 2804 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:37:42.891848 kubelet[2804]: I0307 01:37:42.891747 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:37:42.893072 kubelet[2804]: I0307 01:37:42.893017 2804 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 7 01:37:42.894854 kubelet[2804]: I0307 01:37:42.894793 2804 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:37:42.894854 kubelet[2804]: I0307 01:37:42.894840 2804 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:37:42.901999 kubelet[2804]: I0307 01:37:42.901898 2804 server.go:1262] "Started kubelet" Mar 7 01:37:42.902230 kubelet[2804]: I0307 01:37:42.902092 2804 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:37:42.902370 kubelet[2804]: I0307 01:37:42.902302 2804 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:37:42.902370 kubelet[2804]: I0307 01:37:42.902362 2804 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:37:42.902613 kubelet[2804]: I0307 01:37:42.902560 2804 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:37:42.904745 kubelet[2804]: I0307 01:37:42.904043 2804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" 
Mar 7 01:37:42.904745 kubelet[2804]: I0307 01:37:42.904110 2804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:37:42.906717 kubelet[2804]: I0307 01:37:42.906575 2804 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:37:42.911214 kubelet[2804]: I0307 01:37:42.910908 2804 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:37:42.911214 kubelet[2804]: I0307 01:37:42.911067 2804 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:37:42.911372 kubelet[2804]: I0307 01:37:42.911307 2804 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:37:42.916428 kubelet[2804]: I0307 01:37:42.916314 2804 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:37:42.916672 kubelet[2804]: I0307 01:37:42.916482 2804 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:37:42.919279 kubelet[2804]: E0307 01:37:42.919261 2804 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:37:42.919701 kubelet[2804]: I0307 01:37:42.919639 2804 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:37:42.941460 kubelet[2804]: I0307 01:37:42.941278 2804 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:37:42.943741 kubelet[2804]: I0307 01:37:42.943378 2804 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:37:42.943741 kubelet[2804]: I0307 01:37:42.943422 2804 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:37:42.943741 kubelet[2804]: I0307 01:37:42.943447 2804 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:37:42.943741 kubelet[2804]: E0307 01:37:42.943499 2804 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:37:42.978885 kubelet[2804]: I0307 01:37:42.978807 2804 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:37:42.978885 kubelet[2804]: I0307 01:37:42.978849 2804 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:37:42.978885 kubelet[2804]: I0307 01:37:42.978872 2804 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:37:42.979323 kubelet[2804]: I0307 01:37:42.979057 2804 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:37:42.979323 kubelet[2804]: I0307 01:37:42.979072 2804 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:37:42.979323 kubelet[2804]: I0307 01:37:42.979095 2804 policy_none.go:49] "None policy: Start" Mar 7 01:37:42.979323 kubelet[2804]: I0307 01:37:42.979224 2804 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:37:42.979323 kubelet[2804]: I0307 01:37:42.979270 2804 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:37:42.979529 kubelet[2804]: I0307 01:37:42.979402 2804 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:37:42.979529 kubelet[2804]: I0307 01:37:42.979445 2804 policy_none.go:47] "Start" Mar 7 01:37:42.988473 kubelet[2804]: E0307 01:37:42.988408 2804 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:37:42.988693 kubelet[2804]: I0307 01:37:42.988650 2804 
eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:37:42.988756 kubelet[2804]: I0307 01:37:42.988697 2804 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:37:42.990303 kubelet[2804]: I0307 01:37:42.990258 2804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:37:42.991307 kubelet[2804]: E0307 01:37:42.991090 2804 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:37:43.045413 kubelet[2804]: I0307 01:37:43.045253 2804 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:43.045557 kubelet[2804]: I0307 01:37:43.045253 2804 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:43.046005 kubelet[2804]: I0307 01:37:43.045634 2804 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.057635 kubelet[2804]: E0307 01:37:43.057457 2804 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.058551 kubelet[2804]: E0307 01:37:43.058449 2804 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:43.059551 kubelet[2804]: E0307 01:37:43.059455 2804 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:43.103306 sudo[2844]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 01:37:43.103842 sudo[2844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 01:37:43.111786 
kubelet[2804]: I0307 01:37:43.111734 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.111853 kubelet[2804]: I0307 01:37:43.111807 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:43.111853 kubelet[2804]: I0307 01:37:43.111839 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22e8b1608acaf529188fa1b65671a979-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22e8b1608acaf529188fa1b65671a979\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:43.111906 kubelet[2804]: I0307 01:37:43.111866 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22e8b1608acaf529188fa1b65671a979-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22e8b1608acaf529188fa1b65671a979\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:43.111906 kubelet[2804]: I0307 01:37:43.111892 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22e8b1608acaf529188fa1b65671a979-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22e8b1608acaf529188fa1b65671a979\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:37:43.111955 kubelet[2804]: 
I0307 01:37:43.111914 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.111955 kubelet[2804]: I0307 01:37:43.111937 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.112019 kubelet[2804]: I0307 01:37:43.111961 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.112019 kubelet[2804]: I0307 01:37:43.111982 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:37:43.113086 kubelet[2804]: I0307 01:37:43.112997 2804 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:37:43.127000 kubelet[2804]: I0307 01:37:43.126949 2804 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:37:43.127238 kubelet[2804]: I0307 01:37:43.127032 2804 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 
01:37:43.359698 kubelet[2804]: E0307 01:37:43.358518 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:43.360571 kubelet[2804]: E0307 01:37:43.360485 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:43.360845 kubelet[2804]: E0307 01:37:43.360664 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:43.559479 sudo[2844]: pam_unix(sudo:session): session closed for user root Mar 7 01:37:43.892929 kubelet[2804]: I0307 01:37:43.892882 2804 apiserver.go:52] "Watching apiserver" Mar 7 01:37:43.912223 kubelet[2804]: I0307 01:37:43.912092 2804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:37:43.969286 kubelet[2804]: I0307 01:37:43.968916 2804 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:43.969585 kubelet[2804]: E0307 01:37:43.969564 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:43.970474 kubelet[2804]: E0307 01:37:43.969693 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:43.980462 kubelet[2804]: E0307 01:37:43.980361 2804 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:37:43.981574 kubelet[2804]: E0307 01:37:43.981538 2804 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:43.995847 kubelet[2804]: I0307 01:37:43.995769 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.995715954 podStartE2EDuration="1.995715954s" podCreationTimestamp="2026-03-07 01:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:37:43.995075466 +0000 UTC m=+1.193981478" watchObservedRunningTime="2026-03-07 01:37:43.995715954 +0000 UTC m=+1.194621985" Mar 7 01:37:44.017571 kubelet[2804]: I0307 01:37:44.017483 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.017471117 podStartE2EDuration="2.017471117s" podCreationTimestamp="2026-03-07 01:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:37:44.005966053 +0000 UTC m=+1.204872075" watchObservedRunningTime="2026-03-07 01:37:44.017471117 +0000 UTC m=+1.216377129" Mar 7 01:37:44.017571 kubelet[2804]: I0307 01:37:44.017568 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.017564651 podStartE2EDuration="4.017564651s" podCreationTimestamp="2026-03-07 01:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:37:44.017278508 +0000 UTC m=+1.216184520" watchObservedRunningTime="2026-03-07 01:37:44.017564651 +0000 UTC m=+1.216470683" Mar 7 01:37:44.971920 kubelet[2804]: E0307 01:37:44.971769 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:44.974388 kubelet[2804]: E0307 01:37:44.973720 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:45.151999 sudo[1804]: pam_unix(sudo:session): session closed for user root Mar 7 01:37:45.155138 sshd[1803]: Connection closed by 10.0.0.1 port 36372 Mar 7 01:37:45.156390 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Mar 7 01:37:45.161756 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:36372.service: Deactivated successfully. Mar 7 01:37:45.165043 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:37:45.165730 systemd[1]: session-9.scope: Consumed 11.819s CPU time, 275.7M memory peak. Mar 7 01:37:45.170028 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:37:45.172267 systemd-logind[1546]: Removed session 9. Mar 7 01:37:45.653957 kubelet[2804]: E0307 01:37:45.653849 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:47.495819 kubelet[2804]: I0307 01:37:47.495739 2804 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:37:47.496768 containerd[1571]: time="2026-03-07T01:37:47.496705901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:37:47.497265 kubelet[2804]: I0307 01:37:47.497126 2804 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:37:47.767749 kubelet[2804]: E0307 01:37:47.767548 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:47.978621 kubelet[2804]: E0307 01:37:47.978575 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:48.199568 systemd[1]: Created slice kubepods-besteffort-podb2463a5c_aad5_4424_b560_1ab212706798.slice - libcontainer container kubepods-besteffort-podb2463a5c_aad5_4424_b560_1ab212706798.slice. Mar 7 01:37:48.222507 systemd[1]: Created slice kubepods-burstable-pod12da5a98_f0c3_4605_b0f9_1d9b40d4db0a.slice - libcontainer container kubepods-burstable-pod12da5a98_f0c3_4605_b0f9_1d9b40d4db0a.slice. 
Mar 7 01:37:48.251474 kubelet[2804]: I0307 01:37:48.251372 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cni-path\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.251474 kubelet[2804]: I0307 01:37:48.251446 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hubble-tls\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.251474 kubelet[2804]: I0307 01:37:48.251474 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqz4\" (UniqueName: \"kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-kube-api-access-mpqz4\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.252841 kubelet[2804]: I0307 01:37:48.251499 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2463a5c-aad5-4424-b560-1ab212706798-kube-proxy\") pod \"kube-proxy-4bxbh\" (UID: \"b2463a5c-aad5-4424-b560-1ab212706798\") " pod="kube-system/kube-proxy-4bxbh" Mar 7 01:37:48.252841 kubelet[2804]: I0307 01:37:48.252342 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2463a5c-aad5-4424-b560-1ab212706798-xtables-lock\") pod \"kube-proxy-4bxbh\" (UID: \"b2463a5c-aad5-4424-b560-1ab212706798\") " pod="kube-system/kube-proxy-4bxbh" Mar 7 01:37:48.252841 kubelet[2804]: I0307 01:37:48.252384 2804 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-run\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.252841 kubelet[2804]: I0307 01:37:48.252413 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-cgroup\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.252841 kubelet[2804]: I0307 01:37:48.252441 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-etc-cni-netd\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.252841 kubelet[2804]: I0307 01:37:48.252464 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-lib-modules\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253074 kubelet[2804]: I0307 01:37:48.252490 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-config-path\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253074 kubelet[2804]: I0307 01:37:48.252518 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hostproc\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253074 kubelet[2804]: I0307 01:37:48.252546 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-clustermesh-secrets\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253074 kubelet[2804]: I0307 01:37:48.252586 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-bpf-maps\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253074 kubelet[2804]: I0307 01:37:48.252615 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-xtables-lock\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253074 kubelet[2804]: I0307 01:37:48.252642 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-net\") pod \"cilium-nlqvd\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253264 kubelet[2804]: I0307 01:37:48.252920 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-kernel\") pod \"cilium-nlqvd\" (UID: 
\"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " pod="kube-system/cilium-nlqvd" Mar 7 01:37:48.253264 kubelet[2804]: I0307 01:37:48.253011 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2463a5c-aad5-4424-b560-1ab212706798-lib-modules\") pod \"kube-proxy-4bxbh\" (UID: \"b2463a5c-aad5-4424-b560-1ab212706798\") " pod="kube-system/kube-proxy-4bxbh" Mar 7 01:37:48.253264 kubelet[2804]: I0307 01:37:48.253038 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7j4s\" (UniqueName: \"kubernetes.io/projected/b2463a5c-aad5-4424-b560-1ab212706798-kube-api-access-b7j4s\") pod \"kube-proxy-4bxbh\" (UID: \"b2463a5c-aad5-4424-b560-1ab212706798\") " pod="kube-system/kube-proxy-4bxbh" Mar 7 01:37:48.522745 kubelet[2804]: E0307 01:37:48.522371 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:48.523585 containerd[1571]: time="2026-03-07T01:37:48.523514996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bxbh,Uid:b2463a5c-aad5-4424-b560-1ab212706798,Namespace:kube-system,Attempt:0,}" Mar 7 01:37:48.533261 kubelet[2804]: E0307 01:37:48.532679 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:48.533624 containerd[1571]: time="2026-03-07T01:37:48.533557683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nlqvd,Uid:12da5a98-f0c3-4605-b0f9-1d9b40d4db0a,Namespace:kube-system,Attempt:0,}" Mar 7 01:37:48.560442 containerd[1571]: time="2026-03-07T01:37:48.560303228Z" level=info msg="connecting to shim b2b3bad5ae92728c8135aa830291e56a804c7adcfe3d27f97749cd4c2c86d059" 
address="unix:///run/containerd/s/aefd10f32e79a366c6673201615939ccd7c5914ad5486921260cc8252dd152b9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:37:48.569122 containerd[1571]: time="2026-03-07T01:37:48.568997108Z" level=info msg="connecting to shim 6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2" address="unix:///run/containerd/s/9de4c1da475929a51682fe2c164459c17b95e1e405b57073c4845c40e9c03b9c" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:37:48.598476 systemd[1]: Started cri-containerd-b2b3bad5ae92728c8135aa830291e56a804c7adcfe3d27f97749cd4c2c86d059.scope - libcontainer container b2b3bad5ae92728c8135aa830291e56a804c7adcfe3d27f97749cd4c2c86d059. Mar 7 01:37:48.604057 systemd[1]: Started cri-containerd-6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2.scope - libcontainer container 6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2. Mar 7 01:37:48.652446 containerd[1571]: time="2026-03-07T01:37:48.651610148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bxbh,Uid:b2463a5c-aad5-4424-b560-1ab212706798,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2b3bad5ae92728c8135aa830291e56a804c7adcfe3d27f97749cd4c2c86d059\"" Mar 7 01:37:48.652790 kubelet[2804]: E0307 01:37:48.652649 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:48.660835 containerd[1571]: time="2026-03-07T01:37:48.660704473Z" level=info msg="CreateContainer within sandbox \"b2b3bad5ae92728c8135aa830291e56a804c7adcfe3d27f97749cd4c2c86d059\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:37:48.670493 containerd[1571]: time="2026-03-07T01:37:48.670427256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nlqvd,Uid:12da5a98-f0c3-4605-b0f9-1d9b40d4db0a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\"" Mar 7 01:37:48.672868 kubelet[2804]: E0307 01:37:48.672461 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:48.680695 containerd[1571]: time="2026-03-07T01:37:48.676575774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:37:48.690337 containerd[1571]: time="2026-03-07T01:37:48.689728662Z" level=info msg="Container 7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:37:48.704080 containerd[1571]: time="2026-03-07T01:37:48.703960245Z" level=info msg="CreateContainer within sandbox \"b2b3bad5ae92728c8135aa830291e56a804c7adcfe3d27f97749cd4c2c86d059\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34\"" Mar 7 01:37:48.705599 containerd[1571]: time="2026-03-07T01:37:48.705489975Z" level=info msg="StartContainer for \"7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34\"" Mar 7 01:37:48.707455 containerd[1571]: time="2026-03-07T01:37:48.707288316Z" level=info msg="connecting to shim 7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34" address="unix:///run/containerd/s/aefd10f32e79a366c6673201615939ccd7c5914ad5486921260cc8252dd152b9" protocol=ttrpc version=3 Mar 7 01:37:48.753613 systemd[1]: Started cri-containerd-7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34.scope - libcontainer container 7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34. 
Mar 7 01:37:48.756692 kubelet[2804]: I0307 01:37:48.756671 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtsj\" (UniqueName: \"kubernetes.io/projected/9614f4bb-a2e8-4929-8fdd-d4491f470b55-kube-api-access-vrtsj\") pod \"cilium-operator-6f9c7c5859-vvtcq\" (UID: \"9614f4bb-a2e8-4929-8fdd-d4491f470b55\") " pod="kube-system/cilium-operator-6f9c7c5859-vvtcq" Mar 7 01:37:48.756924 kubelet[2804]: I0307 01:37:48.756849 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9614f4bb-a2e8-4929-8fdd-d4491f470b55-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-vvtcq\" (UID: \"9614f4bb-a2e8-4929-8fdd-d4491f470b55\") " pod="kube-system/cilium-operator-6f9c7c5859-vvtcq" Mar 7 01:37:48.761003 systemd[1]: Created slice kubepods-besteffort-pod9614f4bb_a2e8_4929_8fdd_d4491f470b55.slice - libcontainer container kubepods-besteffort-pod9614f4bb_a2e8_4929_8fdd_d4491f470b55.slice. 
Mar 7 01:37:48.863087 containerd[1571]: time="2026-03-07T01:37:48.862891673Z" level=info msg="StartContainer for \"7d50fbab2e1e39658378315d92eecbf10a54a49c8c4802c9c5b2e7b1e1d29a34\" returns successfully" Mar 7 01:37:48.985543 kubelet[2804]: E0307 01:37:48.985455 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:48.985543 kubelet[2804]: E0307 01:37:48.985455 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:49.069292 kubelet[2804]: E0307 01:37:49.068807 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:37:49.070862 containerd[1571]: time="2026-03-07T01:37:49.070629834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-vvtcq,Uid:9614f4bb-a2e8-4929-8fdd-d4491f470b55,Namespace:kube-system,Attempt:0,}" Mar 7 01:37:49.099376 containerd[1571]: time="2026-03-07T01:37:49.098911117Z" level=info msg="connecting to shim e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181" address="unix:///run/containerd/s/610b1b080898ec75395f21a77278b28b786d961c4f9fe13ce123456c65ee0dc9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:37:49.140606 systemd[1]: Started cri-containerd-e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181.scope - libcontainer container e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181. 
Mar 7 01:37:49.244504 containerd[1571]: time="2026-03-07T01:37:49.244399593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-vvtcq,Uid:9614f4bb-a2e8-4929-8fdd-d4491f470b55,Namespace:kube-system,Attempt:0,} returns sandbox id \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\""
Mar 7 01:37:49.245686 kubelet[2804]: E0307 01:37:49.245573 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:37:51.954888 kubelet[2804]: E0307 01:37:51.954820 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:37:51.976932 kubelet[2804]: I0307 01:37:51.976342 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bxbh" podStartSLOduration=3.976320731 podStartE2EDuration="3.976320731s" podCreationTimestamp="2026-03-07 01:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:37:49.002468661 +0000 UTC m=+6.201374672" watchObservedRunningTime="2026-03-07 01:37:51.976320731 +0000 UTC m=+9.175226773"
Mar 7 01:37:51.997888 kubelet[2804]: E0307 01:37:51.997757 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:37:53.002567 kubelet[2804]: E0307 01:37:53.002530 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:37:56.318000 kubelet[2804]: E0307 01:37:56.317785 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:37:58.346098 kubelet[2804]: E0307 01:37:58.345060 2804 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.316s"
Mar 7 01:38:01.727803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232557560.mount: Deactivated successfully.
Mar 7 01:38:04.392417 containerd[1571]: time="2026-03-07T01:38:04.391955881Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:38:04.393618 containerd[1571]: time="2026-03-07T01:38:04.393544051Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 7 01:38:04.399031 containerd[1571]: time="2026-03-07T01:38:04.398888874Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:38:04.402794 containerd[1571]: time="2026-03-07T01:38:04.402711926Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.722070396s"
Mar 7 01:38:04.402861 containerd[1571]: time="2026-03-07T01:38:04.402837830Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 7 01:38:04.404968 containerd[1571]: time="2026-03-07T01:38:04.404820139Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 7 01:38:04.411103 containerd[1571]: time="2026-03-07T01:38:04.411015365Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 01:38:04.434297 containerd[1571]: time="2026-03-07T01:38:04.433686987Z" level=info msg="Container 82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:04.449961 containerd[1571]: time="2026-03-07T01:38:04.449829930Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\""
Mar 7 01:38:04.452676 containerd[1571]: time="2026-03-07T01:38:04.450988151Z" level=info msg="StartContainer for \"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\""
Mar 7 01:38:04.452676 containerd[1571]: time="2026-03-07T01:38:04.452487578Z" level=info msg="connecting to shim 82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b" address="unix:///run/containerd/s/9de4c1da475929a51682fe2c164459c17b95e1e405b57073c4845c40e9c03b9c" protocol=ttrpc version=3
Mar 7 01:38:04.552590 systemd[1]: Started cri-containerd-82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b.scope - libcontainer container 82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b.
Mar 7 01:38:04.634997 containerd[1571]: time="2026-03-07T01:38:04.634662829Z" level=info msg="StartContainer for \"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\" returns successfully"
Mar 7 01:38:04.657112 systemd[1]: cri-containerd-82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b.scope: Deactivated successfully.
Mar 7 01:38:04.666982 containerd[1571]: time="2026-03-07T01:38:04.666849018Z" level=info msg="received container exit event container_id:\"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\" id:\"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\" pid:3237 exited_at:{seconds:1772847484 nanos:665105024}"
Mar 7 01:38:04.722144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b-rootfs.mount: Deactivated successfully.
Mar 7 01:38:05.559750 kubelet[2804]: E0307 01:38:05.559720 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:05.579856 containerd[1571]: time="2026-03-07T01:38:05.578752414Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 01:38:05.607842 containerd[1571]: time="2026-03-07T01:38:05.607754253Z" level=info msg="Container 41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:05.614701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031723713.mount: Deactivated successfully.
Mar 7 01:38:05.623121 containerd[1571]: time="2026-03-07T01:38:05.623042340Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\""
Mar 7 01:38:05.624821 containerd[1571]: time="2026-03-07T01:38:05.624694712Z" level=info msg="StartContainer for \"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\""
Mar 7 01:38:05.626689 containerd[1571]: time="2026-03-07T01:38:05.626625604Z" level=info msg="connecting to shim 41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d" address="unix:///run/containerd/s/9de4c1da475929a51682fe2c164459c17b95e1e405b57073c4845c40e9c03b9c" protocol=ttrpc version=3
Mar 7 01:38:05.686622 systemd[1]: Started cri-containerd-41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d.scope - libcontainer container 41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d.
Mar 7 01:38:05.763983 containerd[1571]: time="2026-03-07T01:38:05.763915401Z" level=info msg="StartContainer for \"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\" returns successfully"
Mar 7 01:38:05.804883 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:38:05.805087 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:38:05.805629 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:38:05.809464 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:38:05.814404 systemd[1]: cri-containerd-41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d.scope: Deactivated successfully.
Mar 7 01:38:05.816748 containerd[1571]: time="2026-03-07T01:38:05.816151779Z" level=info msg="received container exit event container_id:\"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\" id:\"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\" pid:3293 exited_at:{seconds:1772847485 nanos:815688413}"
Mar 7 01:38:05.851053 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:38:06.103627 containerd[1571]: time="2026-03-07T01:38:06.103410243Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:38:06.105927 containerd[1571]: time="2026-03-07T01:38:06.105506493Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 7 01:38:06.107520 containerd[1571]: time="2026-03-07T01:38:06.107315160Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:38:06.109105 containerd[1571]: time="2026-03-07T01:38:06.108932319Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.704029226s"
Mar 7 01:38:06.109105 containerd[1571]: time="2026-03-07T01:38:06.109000996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 7 01:38:06.121487 containerd[1571]: time="2026-03-07T01:38:06.121382508Z" level=info msg="CreateContainer within sandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 7 01:38:06.139938 containerd[1571]: time="2026-03-07T01:38:06.139780550Z" level=info msg="Container cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:06.154294 containerd[1571]: time="2026-03-07T01:38:06.154086098Z" level=info msg="CreateContainer within sandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\""
Mar 7 01:38:06.156301 containerd[1571]: time="2026-03-07T01:38:06.155456730Z" level=info msg="StartContainer for \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\""
Mar 7 01:38:06.157005 containerd[1571]: time="2026-03-07T01:38:06.156959975Z" level=info msg="connecting to shim cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266" address="unix:///run/containerd/s/610b1b080898ec75395f21a77278b28b786d961c4f9fe13ce123456c65ee0dc9" protocol=ttrpc version=3
Mar 7 01:38:06.217645 systemd[1]: Started cri-containerd-cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266.scope - libcontainer container cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266.
Mar 7 01:38:06.310585 containerd[1571]: time="2026-03-07T01:38:06.310489526Z" level=info msg="StartContainer for \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" returns successfully"
Mar 7 01:38:06.434517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d-rootfs.mount: Deactivated successfully.
Mar 7 01:38:06.568713 kubelet[2804]: E0307 01:38:06.568611 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:06.581206 containerd[1571]: time="2026-03-07T01:38:06.580999987Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:38:06.583287 kubelet[2804]: E0307 01:38:06.581854 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:06.642074 containerd[1571]: time="2026-03-07T01:38:06.641984703Z" level=info msg="Container 97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:06.646482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054756649.mount: Deactivated successfully.
Mar 7 01:38:06.681462 containerd[1571]: time="2026-03-07T01:38:06.681298946Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\""
Mar 7 01:38:06.687022 containerd[1571]: time="2026-03-07T01:38:06.686415210Z" level=info msg="StartContainer for \"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\""
Mar 7 01:38:06.693821 containerd[1571]: time="2026-03-07T01:38:06.693771053Z" level=info msg="connecting to shim 97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d" address="unix:///run/containerd/s/9de4c1da475929a51682fe2c164459c17b95e1e405b57073c4845c40e9c03b9c" protocol=ttrpc version=3
Mar 7 01:38:06.766474 systemd[1]: Started cri-containerd-97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d.scope - libcontainer container 97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d.
Mar 7 01:38:06.921838 containerd[1571]: time="2026-03-07T01:38:06.921747134Z" level=info msg="StartContainer for \"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\" returns successfully"
Mar 7 01:38:06.940425 systemd[1]: cri-containerd-97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d.scope: Deactivated successfully.
Mar 7 01:38:06.952982 containerd[1571]: time="2026-03-07T01:38:06.952878619Z" level=info msg="received container exit event container_id:\"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\" id:\"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\" pid:3383 exited_at:{seconds:1772847486 nanos:951004483}"
Mar 7 01:38:07.435050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d-rootfs.mount: Deactivated successfully.
Mar 7 01:38:07.591027 kubelet[2804]: E0307 01:38:07.590576 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:07.591027 kubelet[2804]: E0307 01:38:07.590752 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:07.599908 containerd[1571]: time="2026-03-07T01:38:07.599658287Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:38:07.634979 kubelet[2804]: I0307 01:38:07.633677 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-vvtcq" podStartSLOduration=2.768964001 podStartE2EDuration="19.633653843s" podCreationTimestamp="2026-03-07 01:37:48 +0000 UTC" firstStartedPulling="2026-03-07 01:37:49.246124485 +0000 UTC m=+6.445030498" lastFinishedPulling="2026-03-07 01:38:06.110814327 +0000 UTC m=+23.309720340" observedRunningTime="2026-03-07 01:38:06.731530144 +0000 UTC m=+23.930436177" watchObservedRunningTime="2026-03-07 01:38:07.633653843 +0000 UTC m=+24.832559865"
Mar 7 01:38:07.645369 containerd[1571]: time="2026-03-07T01:38:07.645252725Z" level=info msg="Container 5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:07.689581 containerd[1571]: time="2026-03-07T01:38:07.688476197Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\""
Mar 7 01:38:07.691875 containerd[1571]: time="2026-03-07T01:38:07.691709990Z" level=info msg="StartContainer for \"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\""
Mar 7 01:38:07.695153 containerd[1571]: time="2026-03-07T01:38:07.695086830Z" level=info msg="connecting to shim 5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a" address="unix:///run/containerd/s/9de4c1da475929a51682fe2c164459c17b95e1e405b57073c4845c40e9c03b9c" protocol=ttrpc version=3
Mar 7 01:38:07.740657 systemd[1]: Started cri-containerd-5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a.scope - libcontainer container 5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a.
Mar 7 01:38:07.816573 systemd[1]: cri-containerd-5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a.scope: Deactivated successfully.
Mar 7 01:38:07.823674 containerd[1571]: time="2026-03-07T01:38:07.823592389Z" level=info msg="received container exit event container_id:\"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\" id:\"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\" pid:3422 exited_at:{seconds:1772847487 nanos:818806167}"
Mar 7 01:38:07.827939 containerd[1571]: time="2026-03-07T01:38:07.827777154Z" level=info msg="StartContainer for \"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\" returns successfully"
Mar 7 01:38:07.877844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a-rootfs.mount: Deactivated successfully.
Mar 7 01:38:08.598951 kubelet[2804]: E0307 01:38:08.598834 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:08.608107 containerd[1571]: time="2026-03-07T01:38:08.608018239Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:38:08.633727 containerd[1571]: time="2026-03-07T01:38:08.633621647Z" level=info msg="Container aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:08.644490 containerd[1571]: time="2026-03-07T01:38:08.644406180Z" level=info msg="CreateContainer within sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\""
Mar 7 01:38:08.645438 containerd[1571]: time="2026-03-07T01:38:08.645410819Z" level=info msg="StartContainer for \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\""
Mar 7 01:38:08.647423 containerd[1571]: time="2026-03-07T01:38:08.647342043Z" level=info msg="connecting to shim aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12" address="unix:///run/containerd/s/9de4c1da475929a51682fe2c164459c17b95e1e405b57073c4845c40e9c03b9c" protocol=ttrpc version=3
Mar 7 01:38:08.697565 systemd[1]: Started cri-containerd-aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12.scope - libcontainer container aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12.
Mar 7 01:38:08.794664 containerd[1571]: time="2026-03-07T01:38:08.794455292Z" level=info msg="StartContainer for \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" returns successfully"
Mar 7 01:38:09.007598 kubelet[2804]: I0307 01:38:09.007416 2804 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 01:38:09.086891 systemd[1]: Created slice kubepods-burstable-pod59e208a9_ead9_4e56_99ea_6f1c49c8fb0e.slice - libcontainer container kubepods-burstable-pod59e208a9_ead9_4e56_99ea_6f1c49c8fb0e.slice.
Mar 7 01:38:09.101195 systemd[1]: Created slice kubepods-burstable-pod69cff3cc_04a1_42b2_9975_b47f56ba2682.slice - libcontainer container kubepods-burstable-pod69cff3cc_04a1_42b2_9975_b47f56ba2682.slice.
Mar 7 01:38:09.126915 kubelet[2804]: I0307 01:38:09.126822 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69cff3cc-04a1-42b2-9975-b47f56ba2682-config-volume\") pod \"coredns-66bc5c9577-96tsq\" (UID: \"69cff3cc-04a1-42b2-9975-b47f56ba2682\") " pod="kube-system/coredns-66bc5c9577-96tsq"
Mar 7 01:38:09.126915 kubelet[2804]: I0307 01:38:09.126885 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgfm\" (UniqueName: \"kubernetes.io/projected/59e208a9-ead9-4e56-99ea-6f1c49c8fb0e-kube-api-access-jsgfm\") pod \"coredns-66bc5c9577-jkgz5\" (UID: \"59e208a9-ead9-4e56-99ea-6f1c49c8fb0e\") " pod="kube-system/coredns-66bc5c9577-jkgz5"
Mar 7 01:38:09.126915 kubelet[2804]: I0307 01:38:09.126903 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhcz\" (UniqueName: \"kubernetes.io/projected/69cff3cc-04a1-42b2-9975-b47f56ba2682-kube-api-access-lfhcz\") pod \"coredns-66bc5c9577-96tsq\" (UID: \"69cff3cc-04a1-42b2-9975-b47f56ba2682\") " pod="kube-system/coredns-66bc5c9577-96tsq"
Mar 7 01:38:09.126915 kubelet[2804]: I0307 01:38:09.126917 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59e208a9-ead9-4e56-99ea-6f1c49c8fb0e-config-volume\") pod \"coredns-66bc5c9577-jkgz5\" (UID: \"59e208a9-ead9-4e56-99ea-6f1c49c8fb0e\") " pod="kube-system/coredns-66bc5c9577-jkgz5"
Mar 7 01:38:09.399925 kubelet[2804]: E0307 01:38:09.399825 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:09.401453 containerd[1571]: time="2026-03-07T01:38:09.401226709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jkgz5,Uid:59e208a9-ead9-4e56-99ea-6f1c49c8fb0e,Namespace:kube-system,Attempt:0,}"
Mar 7 01:38:09.416339 kubelet[2804]: E0307 01:38:09.416253 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:09.418659 containerd[1571]: time="2026-03-07T01:38:09.418540004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-96tsq,Uid:69cff3cc-04a1-42b2-9975-b47f56ba2682,Namespace:kube-system,Attempt:0,}"
Mar 7 01:38:09.612696 kubelet[2804]: E0307 01:38:09.612644 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:10.614762 kubelet[2804]: E0307 01:38:10.614568 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:11.458788 systemd-networkd[1458]: cilium_host: Link UP
Mar 7 01:38:11.459072 systemd-networkd[1458]: cilium_net: Link UP
Mar 7 01:38:11.459578 systemd-networkd[1458]: cilium_net: Gained carrier
Mar 7 01:38:11.460569 systemd-networkd[1458]: cilium_host: Gained carrier
Mar 7 01:38:11.616585 kubelet[2804]: E0307 01:38:11.616453 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:11.658816 systemd-networkd[1458]: cilium_vxlan: Link UP
Mar 7 01:38:11.658856 systemd-networkd[1458]: cilium_vxlan: Gained carrier
Mar 7 01:38:12.003276 kernel: NET: Registered PF_ALG protocol family
Mar 7 01:38:12.182336 systemd-networkd[1458]: cilium_host: Gained IPv6LL
Mar 7 01:38:12.242514 systemd-networkd[1458]: cilium_net: Gained IPv6LL
Mar 7 01:38:13.153807 systemd-networkd[1458]: lxc_health: Link UP
Mar 7 01:38:13.156124 systemd-networkd[1458]: lxc_health: Gained carrier
Mar 7 01:38:13.331926 systemd-networkd[1458]: cilium_vxlan: Gained IPv6LL
Mar 7 01:38:13.488864 systemd-networkd[1458]: lxc87dab3b216be: Link UP
Mar 7 01:38:13.489264 kernel: eth0: renamed from tmp7ea2a
Mar 7 01:38:13.493592 systemd-networkd[1458]: lxc87dab3b216be: Gained carrier
Mar 7 01:38:13.528993 systemd-networkd[1458]: lxc657e2181e567: Link UP
Mar 7 01:38:13.531610 kernel: eth0: renamed from tmp5504c
Mar 7 01:38:13.538466 systemd-networkd[1458]: lxc657e2181e567: Gained carrier
Mar 7 01:38:14.533702 kubelet[2804]: E0307 01:38:14.533571 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:14.590064 kubelet[2804]: I0307 01:38:14.589940 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nlqvd" podStartSLOduration=10.861029976 podStartE2EDuration="26.58992399s" podCreationTimestamp="2026-03-07 01:37:48 +0000 UTC" firstStartedPulling="2026-03-07 01:37:48.675561954 +0000 UTC m=+5.874467997" lastFinishedPulling="2026-03-07 01:38:04.404455989 +0000 UTC m=+21.603362011" observedRunningTime="2026-03-07 01:38:09.654422154 +0000 UTC m=+26.853328186" watchObservedRunningTime="2026-03-07 01:38:14.58992399 +0000 UTC m=+31.788830002"
Mar 7 01:38:14.612465 systemd-networkd[1458]: lxc87dab3b216be: Gained IPv6LL
Mar 7 01:38:14.651260 kubelet[2804]: E0307 01:38:14.650252 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:15.188845 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Mar 7 01:38:15.506651 systemd-networkd[1458]: lxc657e2181e567: Gained IPv6LL
Mar 7 01:38:15.652541 kubelet[2804]: E0307 01:38:15.652470 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:18.911323 containerd[1571]: time="2026-03-07T01:38:18.910588579Z" level=info msg="connecting to shim 5504c3602657f8d0d05a2a4aff75d41c8227633bd164f01b3a90367678d149b3" address="unix:///run/containerd/s/f0f49ba0483ccc0143c69f8889952dc2ad075d5ce2f469eedd807f5db347aafd" namespace=k8s.io protocol=ttrpc version=3
Mar 7 01:38:18.914302 containerd[1571]: time="2026-03-07T01:38:18.914108758Z" level=info msg="connecting to shim 7ea2ade26c15d10f02a817fae5e4de1726ff64b1c0a8a04940bba104200ae378" address="unix:///run/containerd/s/f0aff9d4627ceb44a3edad100e36e1929ef1b5ac2e60c27f3afa53c048106e59" namespace=k8s.io protocol=ttrpc version=3
Mar 7 01:38:18.958927 systemd[1]: Started cri-containerd-5504c3602657f8d0d05a2a4aff75d41c8227633bd164f01b3a90367678d149b3.scope - libcontainer container 5504c3602657f8d0d05a2a4aff75d41c8227633bd164f01b3a90367678d149b3.
Mar 7 01:38:18.990942 systemd[1]: Started cri-containerd-7ea2ade26c15d10f02a817fae5e4de1726ff64b1c0a8a04940bba104200ae378.scope - libcontainer container 7ea2ade26c15d10f02a817fae5e4de1726ff64b1c0a8a04940bba104200ae378.
Mar 7 01:38:19.008132 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 01:38:19.028606 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 01:38:19.104085 containerd[1571]: time="2026-03-07T01:38:19.103933472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-96tsq,Uid:69cff3cc-04a1-42b2-9975-b47f56ba2682,Namespace:kube-system,Attempt:0,} returns sandbox id \"5504c3602657f8d0d05a2a4aff75d41c8227633bd164f01b3a90367678d149b3\""
Mar 7 01:38:19.105145 kubelet[2804]: E0307 01:38:19.105049 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:19.119479 containerd[1571]: time="2026-03-07T01:38:19.119081306Z" level=info msg="CreateContainer within sandbox \"5504c3602657f8d0d05a2a4aff75d41c8227633bd164f01b3a90367678d149b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:38:19.128597 containerd[1571]: time="2026-03-07T01:38:19.128491230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jkgz5,Uid:59e208a9-ead9-4e56-99ea-6f1c49c8fb0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ea2ade26c15d10f02a817fae5e4de1726ff64b1c0a8a04940bba104200ae378\""
Mar 7 01:38:19.130907 kubelet[2804]: E0307 01:38:19.130812 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:19.140718 containerd[1571]: time="2026-03-07T01:38:19.140584989Z" level=info msg="CreateContainer within sandbox \"7ea2ade26c15d10f02a817fae5e4de1726ff64b1c0a8a04940bba104200ae378\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:38:19.167787 containerd[1571]: time="2026-03-07T01:38:19.167631302Z" level=info msg="Container a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:19.170836 containerd[1571]: time="2026-03-07T01:38:19.170689737Z" level=info msg="Container 621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:38:19.187798 containerd[1571]: time="2026-03-07T01:38:19.187757883Z" level=info msg="CreateContainer within sandbox \"5504c3602657f8d0d05a2a4aff75d41c8227633bd164f01b3a90367678d149b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a\""
Mar 7 01:38:19.189536 containerd[1571]: time="2026-03-07T01:38:19.189344173Z" level=info msg="StartContainer for \"a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a\""
Mar 7 01:38:19.191833 containerd[1571]: time="2026-03-07T01:38:19.191451188Z" level=info msg="connecting to shim a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a" address="unix:///run/containerd/s/f0f49ba0483ccc0143c69f8889952dc2ad075d5ce2f469eedd807f5db347aafd" protocol=ttrpc version=3
Mar 7 01:38:19.197350 containerd[1571]: time="2026-03-07T01:38:19.197089487Z" level=info msg="CreateContainer within sandbox \"7ea2ade26c15d10f02a817fae5e4de1726ff64b1c0a8a04940bba104200ae378\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306\""
Mar 7 01:38:19.200760 containerd[1571]: time="2026-03-07T01:38:19.200700930Z" level=info msg="StartContainer for \"621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306\""
Mar 7 01:38:19.204009 containerd[1571]: time="2026-03-07T01:38:19.202792756Z" level=info msg="connecting to shim 621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306" address="unix:///run/containerd/s/f0aff9d4627ceb44a3edad100e36e1929ef1b5ac2e60c27f3afa53c048106e59" protocol=ttrpc version=3
Mar 7 01:38:19.220607 systemd[1]: Started cri-containerd-a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a.scope - libcontainer container a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a.
Mar 7 01:38:19.246863 systemd[1]: Started cri-containerd-621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306.scope - libcontainer container 621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306.
Mar 7 01:38:19.323521 containerd[1571]: time="2026-03-07T01:38:19.323438481Z" level=info msg="StartContainer for \"a66344fc1d0f42720f4800a50f49ecdfacf160be3c4f3656fef67b321e61a92a\" returns successfully"
Mar 7 01:38:19.326706 containerd[1571]: time="2026-03-07T01:38:19.326470553Z" level=info msg="StartContainer for \"621354bc926ab4b7c9e87fda0d63992959b545eec1e1ea8e20136a66c1ebb306\" returns successfully"
Mar 7 01:38:19.697522 kubelet[2804]: E0307 01:38:19.697441 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:19.702817 kubelet[2804]: E0307 01:38:19.702746 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:19.719191 kubelet[2804]: I0307 01:38:19.719062 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-96tsq" podStartSLOduration=31.719041999 podStartE2EDuration="31.719041999s" podCreationTimestamp="2026-03-07 01:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:38:19.718585553 +0000 UTC m=+36.917491575" watchObservedRunningTime="2026-03-07 01:38:19.719041999 +0000 UTC m=+36.917948011"
Mar 7 01:38:19.783051 kubelet[2804]: I0307 01:38:19.782648 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jkgz5" podStartSLOduration=31.782630692 podStartE2EDuration="31.782630692s" podCreationTimestamp="2026-03-07 01:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:38:19.745355883 +0000 UTC m=+36.944261905" watchObservedRunningTime="2026-03-07 01:38:19.782630692 +0000 UTC m=+36.981536694"
Mar 7 01:38:19.847581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870079894.mount: Deactivated successfully.
Mar 7 01:38:20.706856 kubelet[2804]: E0307 01:38:20.706646 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:20.706856 kubelet[2804]: E0307 01:38:20.706766 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:21.708212 kubelet[2804]: E0307 01:38:21.708118 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:38:44.032632 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:40688.service - OpenSSH per-connection server daemon (10.0.0.1:40688).
Mar 7 01:38:44.144007 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 40688 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:38:44.146045 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:38:44.159633 systemd-logind[1546]: New session 10 of user core.
Mar 7 01:38:44.167504 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:38:44.314667 sshd[4150]: Connection closed by 10.0.0.1 port 40688 Mar 7 01:38:44.315082 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Mar 7 01:38:44.320663 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:40688.service: Deactivated successfully. Mar 7 01:38:44.323894 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:38:44.328305 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:38:44.330742 systemd-logind[1546]: Removed session 10. Mar 7 01:38:49.337719 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:45058.service - OpenSSH per-connection server daemon (10.0.0.1:45058). Mar 7 01:38:49.422731 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 45058 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:38:49.424824 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:38:49.431879 systemd-logind[1546]: New session 11 of user core. Mar 7 01:38:49.442627 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:38:49.563414 sshd[4172]: Connection closed by 10.0.0.1 port 45058 Mar 7 01:38:49.564487 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Mar 7 01:38:49.571595 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:45058.service: Deactivated successfully. Mar 7 01:38:49.574871 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:38:49.576130 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:38:49.578351 systemd-logind[1546]: Removed session 11. Mar 7 01:38:54.582684 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:45064.service - OpenSSH per-connection server daemon (10.0.0.1:45064). 
Mar 7 01:38:54.645243 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 45064 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:38:54.647612 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:38:54.657704 systemd-logind[1546]: New session 12 of user core. Mar 7 01:38:54.665528 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:38:54.785927 sshd[4189]: Connection closed by 10.0.0.1 port 45064 Mar 7 01:38:54.786452 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Mar 7 01:38:54.791486 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:45064.service: Deactivated successfully. Mar 7 01:38:54.794340 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:38:54.795760 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:38:54.798624 systemd-logind[1546]: Removed session 12. Mar 7 01:38:59.801076 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:35348.service - OpenSSH per-connection server daemon (10.0.0.1:35348). Mar 7 01:38:59.873742 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:38:59.876309 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:38:59.885400 systemd-logind[1546]: New session 13 of user core. Mar 7 01:38:59.895621 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:39:00.024972 sshd[4208]: Connection closed by 10.0.0.1 port 35348 Mar 7 01:39:00.026593 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:00.036735 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:35348.service: Deactivated successfully. Mar 7 01:39:00.039333 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:39:00.040785 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit. 
Mar 7 01:39:00.045135 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:35354.service - OpenSSH per-connection server daemon (10.0.0.1:35354). Mar 7 01:39:00.047402 systemd-logind[1546]: Removed session 13. Mar 7 01:39:00.115154 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 35354 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:00.117361 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:00.125558 systemd-logind[1546]: New session 14 of user core. Mar 7 01:39:00.138475 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:39:00.296373 sshd[4226]: Connection closed by 10.0.0.1 port 35354 Mar 7 01:39:00.298530 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:00.313446 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:35354.service: Deactivated successfully. Mar 7 01:39:00.318882 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:39:00.324561 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:39:00.332892 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:35362.service - OpenSSH per-connection server daemon (10.0.0.1:35362). Mar 7 01:39:00.336765 systemd-logind[1546]: Removed session 14. Mar 7 01:39:00.397513 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 35362 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:00.399044 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:00.405345 systemd-logind[1546]: New session 15 of user core. Mar 7 01:39:00.421429 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:39:00.509377 sshd[4241]: Connection closed by 10.0.0.1 port 35362 Mar 7 01:39:00.510521 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:00.514903 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:35362.service: Deactivated successfully. 
Mar 7 01:39:00.517473 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:39:00.519476 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:39:00.520998 systemd-logind[1546]: Removed session 15. Mar 7 01:39:00.949889 kubelet[2804]: E0307 01:39:00.949476 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:04.944147 kubelet[2804]: E0307 01:39:04.944026 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:05.525627 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:35370.service - OpenSSH per-connection server daemon (10.0.0.1:35370). Mar 7 01:39:05.592514 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 35370 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:05.594736 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:05.601998 systemd-logind[1546]: New session 16 of user core. Mar 7 01:39:05.610486 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:39:05.703926 sshd[4257]: Connection closed by 10.0.0.1 port 35370 Mar 7 01:39:05.704477 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:05.710332 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:35370.service: Deactivated successfully. Mar 7 01:39:05.712848 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:39:05.714476 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:39:05.716442 systemd-logind[1546]: Removed session 16. Mar 7 01:39:10.718439 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:39268.service - OpenSSH per-connection server daemon (10.0.0.1:39268). 
Mar 7 01:39:10.789755 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 39268 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:10.791985 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:10.798917 systemd-logind[1546]: New session 17 of user core. Mar 7 01:39:10.808454 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:39:10.905785 sshd[4274]: Connection closed by 10.0.0.1 port 39268 Mar 7 01:39:10.906483 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:10.914995 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:39268.service: Deactivated successfully. Mar 7 01:39:10.917107 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:39:10.918331 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:39:10.921923 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:39282.service - OpenSSH per-connection server daemon (10.0.0.1:39282). Mar 7 01:39:10.923422 systemd-logind[1546]: Removed session 17. Mar 7 01:39:10.985337 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 39282 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:10.986859 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:10.993738 systemd-logind[1546]: New session 18 of user core. Mar 7 01:39:11.010567 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:39:11.358473 sshd[4291]: Connection closed by 10.0.0.1 port 39282 Mar 7 01:39:11.360713 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:11.370061 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:39282.service: Deactivated successfully. Mar 7 01:39:11.372663 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:39:11.374284 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit. 
Mar 7 01:39:11.377615 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:39286.service - OpenSSH per-connection server daemon (10.0.0.1:39286). Mar 7 01:39:11.379613 systemd-logind[1546]: Removed session 18. Mar 7 01:39:11.464005 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 39286 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:11.465917 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:11.473735 systemd-logind[1546]: New session 19 of user core. Mar 7 01:39:11.486780 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:39:12.029221 sshd[4306]: Connection closed by 10.0.0.1 port 39286 Mar 7 01:39:12.031138 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:12.038079 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:39286.service: Deactivated successfully. Mar 7 01:39:12.041914 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:39:12.044320 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:39:12.050880 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:39298.service - OpenSSH per-connection server daemon (10.0.0.1:39298). Mar 7 01:39:12.052631 systemd-logind[1546]: Removed session 19. Mar 7 01:39:12.109941 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 39298 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:12.112126 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:12.119053 systemd-logind[1546]: New session 20 of user core. Mar 7 01:39:12.127402 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:39:12.363627 sshd[4328]: Connection closed by 10.0.0.1 port 39298 Mar 7 01:39:12.364387 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:12.375732 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:39298.service: Deactivated successfully. 
Mar 7 01:39:12.381084 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:39:12.382885 systemd-logind[1546]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:39:12.387131 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:39310.service - OpenSSH per-connection server daemon (10.0.0.1:39310). Mar 7 01:39:12.389535 systemd-logind[1546]: Removed session 20. Mar 7 01:39:12.462121 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 39310 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:12.464014 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:12.471678 systemd-logind[1546]: New session 21 of user core. Mar 7 01:39:12.489544 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:39:12.591444 sshd[4343]: Connection closed by 10.0.0.1 port 39310 Mar 7 01:39:12.591833 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:12.596874 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:39310.service: Deactivated successfully. Mar 7 01:39:12.599991 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:39:12.601816 systemd-logind[1546]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:39:12.603855 systemd-logind[1546]: Removed session 21. Mar 7 01:39:17.618803 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:39314.service - OpenSSH per-connection server daemon (10.0.0.1:39314). Mar 7 01:39:17.691143 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 39314 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:17.693242 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:17.699336 systemd-logind[1546]: New session 22 of user core. Mar 7 01:39:17.712523 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 7 01:39:17.818072 sshd[4362]: Connection closed by 10.0.0.1 port 39314 Mar 7 01:39:17.818539 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:17.824403 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:39314.service: Deactivated successfully. Mar 7 01:39:17.828031 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:39:17.831400 systemd-logind[1546]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:39:17.833358 systemd-logind[1546]: Removed session 22. Mar 7 01:39:18.945573 kubelet[2804]: E0307 01:39:18.945469 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:20.944564 kubelet[2804]: E0307 01:39:20.944478 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:22.840741 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:60750.service - OpenSSH per-connection server daemon (10.0.0.1:60750). Mar 7 01:39:22.912220 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 60750 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:22.914774 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:22.922346 systemd-logind[1546]: New session 23 of user core. Mar 7 01:39:22.932586 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:39:23.041752 sshd[4382]: Connection closed by 10.0.0.1 port 60750 Mar 7 01:39:23.042108 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:23.047764 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:60750.service: Deactivated successfully. Mar 7 01:39:23.050511 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:39:23.052636 systemd-logind[1546]: Session 23 logged out. 
Waiting for processes to exit. Mar 7 01:39:23.055559 systemd-logind[1546]: Removed session 23. Mar 7 01:39:24.945412 kubelet[2804]: E0307 01:39:24.945330 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:25.944910 kubelet[2804]: E0307 01:39:25.944799 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:27.945289 kubelet[2804]: E0307 01:39:27.944601 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:28.055885 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Mar 7 01:39:28.131908 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:28.134129 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:28.140618 systemd-logind[1546]: New session 24 of user core. Mar 7 01:39:28.155414 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:39:28.252765 sshd[4398]: Connection closed by 10.0.0.1 port 60762 Mar 7 01:39:28.252934 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:28.265239 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:60762.service: Deactivated successfully. Mar 7 01:39:28.267771 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:39:28.269260 systemd-logind[1546]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:39:28.272964 systemd[1]: Started sshd@24-10.0.0.82:22-10.0.0.1:60772.service - OpenSSH per-connection server daemon (10.0.0.1:60772). 
Mar 7 01:39:28.274616 systemd-logind[1546]: Removed session 24. Mar 7 01:39:28.330804 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 60772 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:28.332648 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:28.339046 systemd-logind[1546]: New session 25 of user core. Mar 7 01:39:28.353487 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:39:29.722384 containerd[1571]: time="2026-03-07T01:39:29.722246484Z" level=info msg="StopContainer for \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" with timeout 30 (s)" Mar 7 01:39:29.726240 containerd[1571]: time="2026-03-07T01:39:29.726027691Z" level=info msg="Stop container \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" with signal terminated" Mar 7 01:39:29.762593 systemd[1]: cri-containerd-cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266.scope: Deactivated successfully. 
Mar 7 01:39:29.764600 containerd[1571]: time="2026-03-07T01:39:29.764560499Z" level=info msg="received container exit event container_id:\"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" id:\"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" pid:3346 exited_at:{seconds:1772847569 nanos:764063783}" Mar 7 01:39:29.798769 containerd[1571]: time="2026-03-07T01:39:29.798709054Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:39:29.803587 containerd[1571]: time="2026-03-07T01:39:29.803502363Z" level=info msg="StopContainer for \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" with timeout 2 (s)" Mar 7 01:39:29.803974 containerd[1571]: time="2026-03-07T01:39:29.803885325Z" level=info msg="Stop container \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" with signal terminated" Mar 7 01:39:29.819536 systemd-networkd[1458]: lxc_health: Link DOWN Mar 7 01:39:29.819548 systemd-networkd[1458]: lxc_health: Lost carrier Mar 7 01:39:29.837362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266-rootfs.mount: Deactivated successfully. Mar 7 01:39:29.848685 systemd[1]: cri-containerd-aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12.scope: Deactivated successfully. Mar 7 01:39:29.849232 systemd[1]: cri-containerd-aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12.scope: Consumed 10.376s CPU time, 128.1M memory peak, 212K read from disk, 13.3M written to disk. 
Mar 7 01:39:29.852753 containerd[1571]: time="2026-03-07T01:39:29.852613032Z" level=info msg="received container exit event container_id:\"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" id:\"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" pid:3460 exited_at:{seconds:1772847569 nanos:851896262}" Mar 7 01:39:29.866151 containerd[1571]: time="2026-03-07T01:39:29.865814764Z" level=info msg="StopContainer for \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" returns successfully" Mar 7 01:39:29.871784 containerd[1571]: time="2026-03-07T01:39:29.871658460Z" level=info msg="StopPodSandbox for \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\"" Mar 7 01:39:29.873793 containerd[1571]: time="2026-03-07T01:39:29.873585560Z" level=info msg="Container to stop \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:39:29.889501 systemd[1]: cri-containerd-e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181.scope: Deactivated successfully. Mar 7 01:39:29.893731 containerd[1571]: time="2026-03-07T01:39:29.893536528Z" level=info msg="received sandbox exit event container_id:\"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" id:\"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" exit_status:137 exited_at:{seconds:1772847569 nanos:892678525}" monitor_name=podsandbox Mar 7 01:39:29.904675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12-rootfs.mount: Deactivated successfully. 
Mar 7 01:39:29.922433 containerd[1571]: time="2026-03-07T01:39:29.922278647Z" level=info msg="StopContainer for \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" returns successfully" Mar 7 01:39:29.922789 containerd[1571]: time="2026-03-07T01:39:29.922770633Z" level=info msg="StopPodSandbox for \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\"" Mar 7 01:39:29.922946 containerd[1571]: time="2026-03-07T01:39:29.922929139Z" level=info msg="Container to stop \"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:39:29.923005 containerd[1571]: time="2026-03-07T01:39:29.922993629Z" level=info msg="Container to stop \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:39:29.923085 containerd[1571]: time="2026-03-07T01:39:29.923056155Z" level=info msg="Container to stop \"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:39:29.923346 containerd[1571]: time="2026-03-07T01:39:29.923292035Z" level=info msg="Container to stop \"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:39:29.923571 containerd[1571]: time="2026-03-07T01:39:29.923397852Z" level=info msg="Container to stop \"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:39:29.929818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181-rootfs.mount: Deactivated successfully. Mar 7 01:39:29.934581 systemd[1]: cri-containerd-6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2.scope: Deactivated successfully. 
Mar 7 01:39:29.940948 containerd[1571]: time="2026-03-07T01:39:29.940827645Z" level=info msg="shim disconnected" id=e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181 namespace=k8s.io Mar 7 01:39:29.940948 containerd[1571]: time="2026-03-07T01:39:29.940864404Z" level=warning msg="cleaning up after shim disconnected" id=e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181 namespace=k8s.io Mar 7 01:39:29.940948 containerd[1571]: time="2026-03-07T01:39:29.940876827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:39:29.943150 containerd[1571]: time="2026-03-07T01:39:29.943004392Z" level=info msg="received sandbox exit event container_id:\"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" id:\"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" exit_status:137 exited_at:{seconds:1772847569 nanos:935634736}" monitor_name=podsandbox Mar 7 01:39:29.990717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181-shm.mount: Deactivated successfully. 
Mar 7 01:39:29.993058 containerd[1571]: time="2026-03-07T01:39:29.992938830Z" level=info msg="received sandbox container exit event sandbox_id:\"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" exit_status:137 exited_at:{seconds:1772847569 nanos:892678525}" monitor_name=criService Mar 7 01:39:29.995943 containerd[1571]: time="2026-03-07T01:39:29.995844781Z" level=info msg="TearDown network for sandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" successfully" Mar 7 01:39:29.995943 containerd[1571]: time="2026-03-07T01:39:29.995871671Z" level=info msg="StopPodSandbox for \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" returns successfully" Mar 7 01:39:30.008708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2-rootfs.mount: Deactivated successfully. Mar 7 01:39:30.021663 containerd[1571]: time="2026-03-07T01:39:30.021401881Z" level=info msg="shim disconnected" id=6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2 namespace=k8s.io Mar 7 01:39:30.021663 containerd[1571]: time="2026-03-07T01:39:30.021446694Z" level=warning msg="cleaning up after shim disconnected" id=6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2 namespace=k8s.io Mar 7 01:39:30.021663 containerd[1571]: time="2026-03-07T01:39:30.021459247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:39:30.048024 containerd[1571]: time="2026-03-07T01:39:30.047841840Z" level=info msg="received sandbox container exit event sandbox_id:\"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" exit_status:137 exited_at:{seconds:1772847569 nanos:935634736}" monitor_name=criService Mar 7 01:39:30.049639 containerd[1571]: time="2026-03-07T01:39:30.048070737Z" level=info msg="TearDown network for sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" successfully" Mar 7 01:39:30.049639 containerd[1571]: 
time="2026-03-07T01:39:30.049536577Z" level=info msg="StopPodSandbox for \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" returns successfully" Mar 7 01:39:30.127393 kubelet[2804]: I0307 01:39:30.127150 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrtsj\" (UniqueName: \"kubernetes.io/projected/9614f4bb-a2e8-4929-8fdd-d4491f470b55-kube-api-access-vrtsj\") pod \"9614f4bb-a2e8-4929-8fdd-d4491f470b55\" (UID: \"9614f4bb-a2e8-4929-8fdd-d4491f470b55\") " Mar 7 01:39:30.127393 kubelet[2804]: I0307 01:39:30.127274 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9614f4bb-a2e8-4929-8fdd-d4491f470b55-cilium-config-path\") pod \"9614f4bb-a2e8-4929-8fdd-d4491f470b55\" (UID: \"9614f4bb-a2e8-4929-8fdd-d4491f470b55\") " Mar 7 01:39:30.130670 kubelet[2804]: I0307 01:39:30.130567 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9614f4bb-a2e8-4929-8fdd-d4491f470b55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9614f4bb-a2e8-4929-8fdd-d4491f470b55" (UID: "9614f4bb-a2e8-4929-8fdd-d4491f470b55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:39:30.134527 kubelet[2804]: I0307 01:39:30.134449 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9614f4bb-a2e8-4929-8fdd-d4491f470b55-kube-api-access-vrtsj" (OuterVolumeSpecName: "kube-api-access-vrtsj") pod "9614f4bb-a2e8-4929-8fdd-d4491f470b55" (UID: "9614f4bb-a2e8-4929-8fdd-d4491f470b55"). InnerVolumeSpecName "kube-api-access-vrtsj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:39:30.227976 kubelet[2804]: I0307 01:39:30.227888 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-bpf-maps\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.227976 kubelet[2804]: I0307 01:39:30.227944 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-net\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.227976 kubelet[2804]: I0307 01:39:30.227968 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpqz4\" (UniqueName: \"kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-kube-api-access-mpqz4\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.227976 kubelet[2804]: I0307 01:39:30.227981 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-lib-modules\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.228291 kubelet[2804]: I0307 01:39:30.227994 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-xtables-lock\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.228291 kubelet[2804]: I0307 01:39:30.228007 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cni-path\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.228291 kubelet[2804]: I0307 01:39:30.228033 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-run\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.228291 kubelet[2804]: I0307 01:39:30.228050 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-kernel\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.228291 kubelet[2804]: I0307 01:39:30.228040 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.228291 kubelet[2804]: I0307 01:39:30.228078 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.230365 kubelet[2804]: I0307 01:39:30.228061 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-cgroup\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.230365 kubelet[2804]: I0307 01:39:30.228096 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.230365 kubelet[2804]: I0307 01:39:30.228113 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-clustermesh-secrets\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.230365 kubelet[2804]: I0307 01:39:30.228144 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-config-path\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.230365 kubelet[2804]: I0307 01:39:30.228156 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hostproc\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.230365 kubelet[2804]: I0307 01:39:30.228248 2804 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hubble-tls\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228271 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-etc-cni-netd\") pod \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\" (UID: \"12da5a98-f0c3-4605-b0f9-1d9b40d4db0a\") " Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228263 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cni-path" (OuterVolumeSpecName: "cni-path") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228364 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228375 2804 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228429 2804 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228443 2804 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.230583 kubelet[2804]: I0307 01:39:30.228458 2804 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vrtsj\" (UniqueName: \"kubernetes.io/projected/9614f4bb-a2e8-4929-8fdd-d4491f470b55-kube-api-access-vrtsj\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.230763 kubelet[2804]: I0307 01:39:30.228470 2804 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9614f4bb-a2e8-4929-8fdd-d4491f470b55-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.230763 kubelet[2804]: I0307 01:39:30.228496 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.230763 kubelet[2804]: I0307 01:39:30.228522 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hostproc" (OuterVolumeSpecName: "hostproc") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.232654 kubelet[2804]: I0307 01:39:30.232634 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:39:30.232863 kubelet[2804]: I0307 01:39:30.232732 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.233037 kubelet[2804]: I0307 01:39:30.232981 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.233037 kubelet[2804]: I0307 01:39:30.232984 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:39:30.233037 kubelet[2804]: I0307 01:39:30.233001 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:39:30.233266 kubelet[2804]: I0307 01:39:30.233062 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-kube-api-access-mpqz4" (OuterVolumeSpecName: "kube-api-access-mpqz4") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "kube-api-access-mpqz4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:39:30.233949 kubelet[2804]: I0307 01:39:30.233879 2804 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" (UID: "12da5a98-f0c3-4605-b0f9-1d9b40d4db0a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329530 2804 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329595 2804 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329610 2804 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329620 2804 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329635 2804 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329647 2804 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329659 2804 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.329788 kubelet[2804]: I0307 01:39:30.329665 
2804 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.330081 kubelet[2804]: I0307 01:39:30.329673 2804 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.330081 kubelet[2804]: I0307 01:39:30.329679 2804 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpqz4\" (UniqueName: \"kubernetes.io/projected/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-kube-api-access-mpqz4\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.330081 kubelet[2804]: I0307 01:39:30.329687 2804 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 7 01:39:30.835059 systemd[1]: var-lib-kubelet-pods-9614f4bb\x2da2e8\x2d4929\x2d8fdd\x2dd4491f470b55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvrtsj.mount: Deactivated successfully. Mar 7 01:39:30.835387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2-shm.mount: Deactivated successfully. Mar 7 01:39:30.835497 systemd[1]: var-lib-kubelet-pods-12da5a98\x2df0c3\x2d4605\x2db0f9\x2d1d9b40d4db0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmpqz4.mount: Deactivated successfully. Mar 7 01:39:30.835594 systemd[1]: var-lib-kubelet-pods-12da5a98\x2df0c3\x2d4605\x2db0f9\x2d1d9b40d4db0a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 01:39:30.835689 systemd[1]: var-lib-kubelet-pods-12da5a98\x2df0c3\x2d4605\x2db0f9\x2d1d9b40d4db0a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 7 01:39:30.953030 systemd[1]: Removed slice kubepods-burstable-pod12da5a98_f0c3_4605_b0f9_1d9b40d4db0a.slice - libcontainer container kubepods-burstable-pod12da5a98_f0c3_4605_b0f9_1d9b40d4db0a.slice. Mar 7 01:39:30.953154 systemd[1]: kubepods-burstable-pod12da5a98_f0c3_4605_b0f9_1d9b40d4db0a.slice: Consumed 10.632s CPU time, 128.4M memory peak, 224K read from disk, 13.3M written to disk. Mar 7 01:39:30.954769 systemd[1]: Removed slice kubepods-besteffort-pod9614f4bb_a2e8_4929_8fdd_d4491f470b55.slice - libcontainer container kubepods-besteffort-pod9614f4bb_a2e8_4929_8fdd_d4491f470b55.slice. Mar 7 01:39:30.971031 kubelet[2804]: I0307 01:39:30.970898 2804 scope.go:117] "RemoveContainer" containerID="aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12" Mar 7 01:39:30.977926 containerd[1571]: time="2026-03-07T01:39:30.977012880Z" level=info msg="RemoveContainer for \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\"" Mar 7 01:39:30.987311 containerd[1571]: time="2026-03-07T01:39:30.987139801Z" level=info msg="RemoveContainer for \"aab0cf625ef5ec70cbb92187672fe7443a6b37908379ba1b8c4eef822378ba12\" returns successfully" Mar 7 01:39:30.987646 kubelet[2804]: I0307 01:39:30.987525 2804 scope.go:117] "RemoveContainer" containerID="5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a" Mar 7 01:39:30.989684 containerd[1571]: time="2026-03-07T01:39:30.989641876Z" level=info msg="RemoveContainer for \"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\"" Mar 7 01:39:30.995600 containerd[1571]: time="2026-03-07T01:39:30.995535077Z" level=info msg="RemoveContainer for \"5edeff818e066d5546fcbfb8070365c7fc93d9b5ca2ee57fe205286e4f7d355a\" returns successfully" Mar 7 01:39:30.995913 kubelet[2804]: I0307 01:39:30.995839 2804 scope.go:117] "RemoveContainer" containerID="97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d" Mar 7 01:39:31.000715 containerd[1571]: time="2026-03-07T01:39:31.000600370Z" level=info 
msg="RemoveContainer for \"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\"" Mar 7 01:39:31.008822 containerd[1571]: time="2026-03-07T01:39:31.008713236Z" level=info msg="RemoveContainer for \"97e8b332a4c70e333ac47667ce8c83b868131731e301cdb79ad67d05e425b50d\" returns successfully" Mar 7 01:39:31.009433 kubelet[2804]: I0307 01:39:31.009268 2804 scope.go:117] "RemoveContainer" containerID="41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d" Mar 7 01:39:31.011429 containerd[1571]: time="2026-03-07T01:39:31.011135651Z" level=info msg="RemoveContainer for \"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\"" Mar 7 01:39:31.017574 containerd[1571]: time="2026-03-07T01:39:31.017508597Z" level=info msg="RemoveContainer for \"41961096c81dade17d35e8667fc46ee466da660e02b820158b2e47d1613ba93d\" returns successfully" Mar 7 01:39:31.017891 kubelet[2804]: I0307 01:39:31.017863 2804 scope.go:117] "RemoveContainer" containerID="82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b" Mar 7 01:39:31.022518 containerd[1571]: time="2026-03-07T01:39:31.022434770Z" level=info msg="RemoveContainer for \"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\"" Mar 7 01:39:31.039210 containerd[1571]: time="2026-03-07T01:39:31.039117226Z" level=info msg="RemoveContainer for \"82d711408f0902760e872cc9bfbe4625a53ff7e415c8749a942cd396622cc62b\" returns successfully" Mar 7 01:39:31.039772 kubelet[2804]: I0307 01:39:31.039623 2804 scope.go:117] "RemoveContainer" containerID="cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266" Mar 7 01:39:31.042852 containerd[1571]: time="2026-03-07T01:39:31.042807608Z" level=info msg="RemoveContainer for \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\"" Mar 7 01:39:31.048524 containerd[1571]: time="2026-03-07T01:39:31.048455022Z" level=info msg="RemoveContainer for \"cceaadd89fc07d6b147d3211b4e74a3b558209a98798f7a032eefd77f6291266\" returns successfully" Mar 7 
01:39:31.662276 sshd[4415]: Connection closed by 10.0.0.1 port 60772 Mar 7 01:39:31.664838 sshd-session[4412]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:31.690370 systemd[1]: sshd@24-10.0.0.82:22-10.0.0.1:60772.service: Deactivated successfully. Mar 7 01:39:31.693837 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:39:31.702832 systemd-logind[1546]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:39:31.714862 systemd[1]: Started sshd@25-10.0.0.82:22-10.0.0.1:47014.service - OpenSSH per-connection server daemon (10.0.0.1:47014). Mar 7 01:39:31.718814 systemd-logind[1546]: Removed session 25. Mar 7 01:39:31.835666 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 47014 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:31.837694 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:31.847675 systemd-logind[1546]: New session 26 of user core. Mar 7 01:39:31.862615 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 7 01:39:32.599859 sshd[4567]: Connection closed by 10.0.0.1 port 47014 Mar 7 01:39:32.602640 sshd-session[4564]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:32.628550 systemd[1]: sshd@25-10.0.0.82:22-10.0.0.1:47014.service: Deactivated successfully. Mar 7 01:39:32.640434 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 01:39:32.645732 systemd-logind[1546]: Session 26 logged out. Waiting for processes to exit. Mar 7 01:39:32.650158 systemd[1]: Started sshd@26-10.0.0.82:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028). Mar 7 01:39:32.654711 systemd-logind[1546]: Removed session 26. Mar 7 01:39:32.721799 systemd[1]: Created slice kubepods-burstable-podc1d82381_500e_479e_9e6c_43eb63cec35c.slice - libcontainer container kubepods-burstable-podc1d82381_500e_479e_9e6c_43eb63cec35c.slice. 
Mar 7 01:39:32.774737 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:32.778944 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:32.804809 systemd-logind[1546]: New session 27 of user core. Mar 7 01:39:32.816814 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 7 01:39:32.843806 sshd[4583]: Connection closed by 10.0.0.1 port 47028 Mar 7 01:39:32.844666 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Mar 7 01:39:32.854024 kubelet[2804]: I0307 01:39:32.853754 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1d82381-500e-479e-9e6c-43eb63cec35c-hubble-tls\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854024 kubelet[2804]: I0307 01:39:32.853848 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-etc-cni-netd\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854024 kubelet[2804]: I0307 01:39:32.853878 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-cilium-run\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854024 kubelet[2804]: I0307 01:39:32.853899 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-bpf-maps\") pod \"cilium-lgjrn\" (UID: 
\"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854024 kubelet[2804]: I0307 01:39:32.853921 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1d82381-500e-479e-9e6c-43eb63cec35c-cilium-ipsec-secrets\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854024 kubelet[2804]: I0307 01:39:32.853944 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-host-proc-sys-kernel\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854835 kubelet[2804]: I0307 01:39:32.853968 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-cilium-cgroup\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854835 kubelet[2804]: I0307 01:39:32.853990 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-cni-path\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854835 kubelet[2804]: I0307 01:39:32.854013 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-xtables-lock\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854835 kubelet[2804]: I0307 
01:39:32.854037 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-host-proc-sys-net\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854835 kubelet[2804]: I0307 01:39:32.854063 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1d82381-500e-479e-9e6c-43eb63cec35c-cilium-config-path\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.854835 kubelet[2804]: I0307 01:39:32.854102 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-hostproc\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.855126 kubelet[2804]: I0307 01:39:32.854124 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1d82381-500e-479e-9e6c-43eb63cec35c-lib-modules\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.855126 kubelet[2804]: I0307 01:39:32.854144 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1d82381-500e-479e-9e6c-43eb63cec35c-clustermesh-secrets\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.855126 kubelet[2804]: I0307 01:39:32.854259 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tvrrf\" (UniqueName: \"kubernetes.io/projected/c1d82381-500e-479e-9e6c-43eb63cec35c-kube-api-access-tvrrf\") pod \"cilium-lgjrn\" (UID: \"c1d82381-500e-479e-9e6c-43eb63cec35c\") " pod="kube-system/cilium-lgjrn" Mar 7 01:39:32.858072 systemd[1]: sshd@26-10.0.0.82:22-10.0.0.1:47028.service: Deactivated successfully. Mar 7 01:39:32.861024 systemd[1]: session-27.scope: Deactivated successfully. Mar 7 01:39:32.864307 systemd-logind[1546]: Session 27 logged out. Waiting for processes to exit. Mar 7 01:39:32.867864 systemd[1]: Started sshd@27-10.0.0.82:22-10.0.0.1:47030.service - OpenSSH per-connection server daemon (10.0.0.1:47030). Mar 7 01:39:32.870155 systemd-logind[1546]: Removed session 27. Mar 7 01:39:32.949140 kubelet[2804]: I0307 01:39:32.948796 2804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12da5a98-f0c3-4605-b0f9-1d9b40d4db0a" path="/var/lib/kubelet/pods/12da5a98-f0c3-4605-b0f9-1d9b40d4db0a/volumes" Mar 7 01:39:32.950582 kubelet[2804]: I0307 01:39:32.950539 2804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9614f4bb-a2e8-4929-8fdd-d4491f470b55" path="/var/lib/kubelet/pods/9614f4bb-a2e8-4929-8fdd-d4491f470b55/volumes" Mar 7 01:39:33.020807 sshd[4591]: Accepted publickey for core from 10.0.0.1 port 47030 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:39:33.023818 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:39:33.040141 systemd-logind[1546]: New session 28 of user core. 
Mar 7 01:39:33.042969 kubelet[2804]: E0307 01:39:33.042917 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:33.044279 containerd[1571]: time="2026-03-07T01:39:33.044009718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgjrn,Uid:c1d82381-500e-479e-9e6c-43eb63cec35c,Namespace:kube-system,Attempt:0,}" Mar 7 01:39:33.050852 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 7 01:39:33.117437 containerd[1571]: time="2026-03-07T01:39:33.116689568Z" level=info msg="connecting to shim fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897" address="unix:///run/containerd/s/2dfa17850d80830e0d3e8d30a50a4353f779d24bb5440d739e739ee2b1048003" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:39:33.212617 systemd[1]: Started cri-containerd-fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897.scope - libcontainer container fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897. 
Mar 7 01:39:33.313730 containerd[1571]: time="2026-03-07T01:39:33.313547993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgjrn,Uid:c1d82381-500e-479e-9e6c-43eb63cec35c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\"" Mar 7 01:39:33.315282 kubelet[2804]: E0307 01:39:33.315069 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:33.330769 containerd[1571]: time="2026-03-07T01:39:33.330557643Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:39:33.415875 containerd[1571]: time="2026-03-07T01:39:33.415774838Z" level=info msg="Container 3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:39:33.444468 containerd[1571]: time="2026-03-07T01:39:33.443058825Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796\"" Mar 7 01:39:33.446367 containerd[1571]: time="2026-03-07T01:39:33.446238110Z" level=info msg="StartContainer for \"3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796\"" Mar 7 01:39:33.447919 containerd[1571]: time="2026-03-07T01:39:33.447803427Z" level=info msg="connecting to shim 3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796" address="unix:///run/containerd/s/2dfa17850d80830e0d3e8d30a50a4353f779d24bb5440d739e739ee2b1048003" protocol=ttrpc version=3 Mar 7 01:39:33.503713 systemd[1]: Started cri-containerd-3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796.scope - libcontainer container 
3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796.
Mar 7 01:39:33.591663 kubelet[2804]: E0307 01:39:33.591574 2804 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:39:33.603453 containerd[1571]: time="2026-03-07T01:39:33.603300088Z" level=info msg="StartContainer for \"3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796\" returns successfully"
Mar 7 01:39:33.623479 systemd[1]: cri-containerd-3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796.scope: Deactivated successfully.
Mar 7 01:39:33.633113 containerd[1571]: time="2026-03-07T01:39:33.633067152Z" level=info msg="received container exit event container_id:\"3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796\" id:\"3da9c30d7dc63d8e799bbc6ad2845f467cd54adf64bda1892aac79c3bd689796\" pid:4666 exited_at:{seconds:1772847573 nanos:632279715}"
Mar 7 01:39:34.021099 kubelet[2804]: E0307 01:39:34.020981 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:34.029686 containerd[1571]: time="2026-03-07T01:39:34.029499918Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 01:39:34.067254 containerd[1571]: time="2026-03-07T01:39:34.066913964Z" level=info msg="Container 6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:39:34.095098 containerd[1571]: time="2026-03-07T01:39:34.095000434Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa\""
Mar 7 01:39:34.096492 containerd[1571]: time="2026-03-07T01:39:34.096463798Z" level=info msg="StartContainer for \"6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa\""
Mar 7 01:39:34.100245 containerd[1571]: time="2026-03-07T01:39:34.099954814Z" level=info msg="connecting to shim 6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa" address="unix:///run/containerd/s/2dfa17850d80830e0d3e8d30a50a4353f779d24bb5440d739e739ee2b1048003" protocol=ttrpc version=3
Mar 7 01:39:34.139642 systemd[1]: Started cri-containerd-6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa.scope - libcontainer container 6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa.
Mar 7 01:39:34.236622 containerd[1571]: time="2026-03-07T01:39:34.236459817Z" level=info msg="StartContainer for \"6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa\" returns successfully"
Mar 7 01:39:34.247437 systemd[1]: cri-containerd-6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa.scope: Deactivated successfully.
Mar 7 01:39:34.254234 containerd[1571]: time="2026-03-07T01:39:34.251408346Z" level=info msg="received container exit event container_id:\"6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa\" id:\"6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa\" pid:4710 exited_at:{seconds:1772847574 nanos:250467745}"
Mar 7 01:39:34.328564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6820a9063e0453a731ab95708ad23146a6026da369fadacedad1d476f10e4efa-rootfs.mount: Deactivated successfully.
Mar 7 01:39:35.038388 kubelet[2804]: E0307 01:39:35.037953 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:35.055075 containerd[1571]: time="2026-03-07T01:39:35.054914874Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:39:35.117790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339911041.mount: Deactivated successfully.
Mar 7 01:39:35.120310 containerd[1571]: time="2026-03-07T01:39:35.117924297Z" level=info msg="Container cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:39:35.123394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093287389.mount: Deactivated successfully.
Mar 7 01:39:35.132244 containerd[1571]: time="2026-03-07T01:39:35.132081632Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734\""
Mar 7 01:39:35.135055 containerd[1571]: time="2026-03-07T01:39:35.133451675Z" level=info msg="StartContainer for \"cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734\""
Mar 7 01:39:35.135887 containerd[1571]: time="2026-03-07T01:39:35.135852589Z" level=info msg="connecting to shim cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734" address="unix:///run/containerd/s/2dfa17850d80830e0d3e8d30a50a4353f779d24bb5440d739e739ee2b1048003" protocol=ttrpc version=3
Mar 7 01:39:35.219932 systemd[1]: Started cri-containerd-cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734.scope - libcontainer container cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734.
Mar 7 01:39:35.489536 containerd[1571]: time="2026-03-07T01:39:35.489439910Z" level=info msg="StartContainer for \"cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734\" returns successfully"
Mar 7 01:39:35.507057 systemd[1]: cri-containerd-cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734.scope: Deactivated successfully.
Mar 7 01:39:35.508763 containerd[1571]: time="2026-03-07T01:39:35.508604691Z" level=info msg="received container exit event container_id:\"cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734\" id:\"cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734\" pid:4753 exited_at:{seconds:1772847575 nanos:508235613}"
Mar 7 01:39:35.554481 kubelet[2804]: I0307 01:39:35.551533 2804 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T01:39:35Z","lastTransitionTime":"2026-03-07T01:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 7 01:39:35.590773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb55c9329e5d2d7630420d11a8dd0cd3f2a8775dad0b6bfae30f30572cd0b734-rootfs.mount: Deactivated successfully.
Mar 7 01:39:36.058981 kubelet[2804]: E0307 01:39:36.058893 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:36.067776 containerd[1571]: time="2026-03-07T01:39:36.067495442Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:39:36.117279 containerd[1571]: time="2026-03-07T01:39:36.117120245Z" level=info msg="Container 941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:39:36.130280 containerd[1571]: time="2026-03-07T01:39:36.130099987Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f\""
Mar 7 01:39:36.132253 containerd[1571]: time="2026-03-07T01:39:36.131422461Z" level=info msg="StartContainer for \"941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f\""
Mar 7 01:39:36.132894 containerd[1571]: time="2026-03-07T01:39:36.132861011Z" level=info msg="connecting to shim 941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f" address="unix:///run/containerd/s/2dfa17850d80830e0d3e8d30a50a4353f779d24bb5440d739e739ee2b1048003" protocol=ttrpc version=3
Mar 7 01:39:36.190153 systemd[1]: Started cri-containerd-941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f.scope - libcontainer container 941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f.
Mar 7 01:39:36.261488 systemd[1]: cri-containerd-941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f.scope: Deactivated successfully.
Mar 7 01:39:36.268268 containerd[1571]: time="2026-03-07T01:39:36.268226927Z" level=info msg="received container exit event container_id:\"941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f\" id:\"941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f\" pid:4793 exited_at:{seconds:1772847576 nanos:267910249}"
Mar 7 01:39:36.268767 containerd[1571]: time="2026-03-07T01:39:36.268592360Z" level=info msg="StartContainer for \"941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f\" returns successfully"
Mar 7 01:39:36.355463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-941e07edf4878fada9beb4bc53be574121aafa2b921caa92b7447e48f6f3dc8f-rootfs.mount: Deactivated successfully.
Mar 7 01:39:37.094493 kubelet[2804]: E0307 01:39:37.094424 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:37.108395 containerd[1571]: time="2026-03-07T01:39:37.108146443Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:39:37.157396 containerd[1571]: time="2026-03-07T01:39:37.155150489Z" level=info msg="Container 135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:39:37.158108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745375780.mount: Deactivated successfully.
Mar 7 01:39:37.199728 containerd[1571]: time="2026-03-07T01:39:37.199586314Z" level=info msg="CreateContainer within sandbox \"fda67edecb19caabef75beb408a35d1346494e207cfd6da17e57540d289e1897\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39\""
Mar 7 01:39:37.202111 containerd[1571]: time="2026-03-07T01:39:37.202045475Z" level=info msg="StartContainer for \"135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39\""
Mar 7 01:39:37.203705 containerd[1571]: time="2026-03-07T01:39:37.203617305Z" level=info msg="connecting to shim 135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39" address="unix:///run/containerd/s/2dfa17850d80830e0d3e8d30a50a4353f779d24bb5440d739e739ee2b1048003" protocol=ttrpc version=3
Mar 7 01:39:37.245817 systemd[1]: Started cri-containerd-135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39.scope - libcontainer container 135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39.
Mar 7 01:39:37.353246 containerd[1571]: time="2026-03-07T01:39:37.352678168Z" level=info msg="StartContainer for \"135494557c90423a1d3413bc1e3be0917b727291f67747f1d1323e36a0569d39\" returns successfully"
Mar 7 01:39:38.104146 kubelet[2804]: E0307 01:39:38.103996 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:38.147075 kubelet[2804]: I0307 01:39:38.146926 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lgjrn" podStartSLOduration=6.146901569 podStartE2EDuration="6.146901569s" podCreationTimestamp="2026-03-07 01:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:39:38.143499411 +0000 UTC m=+115.342405433" watchObservedRunningTime="2026-03-07 01:39:38.146901569 +0000 UTC m=+115.345807580"
Mar 7 01:39:38.228344 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 7 01:39:39.116156 kubelet[2804]: E0307 01:39:39.116094 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:42.946884 containerd[1571]: time="2026-03-07T01:39:42.946426242Z" level=info msg="StopPodSandbox for \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\""
Mar 7 01:39:42.946884 containerd[1571]: time="2026-03-07T01:39:42.946596119Z" level=info msg="TearDown network for sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" successfully"
Mar 7 01:39:42.946884 containerd[1571]: time="2026-03-07T01:39:42.946612759Z" level=info msg="StopPodSandbox for \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" returns successfully"
Mar 7 01:39:42.950774 containerd[1571]: time="2026-03-07T01:39:42.947037721Z" level=info msg="RemovePodSandbox for \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\""
Mar 7 01:39:42.950774 containerd[1571]: time="2026-03-07T01:39:42.947142397Z" level=info msg="Forcibly stopping sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\""
Mar 7 01:39:42.950774 containerd[1571]: time="2026-03-07T01:39:42.947292337Z" level=info msg="TearDown network for sandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" successfully"
Mar 7 01:39:42.954624 kubelet[2804]: E0307 01:39:42.946900 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:42.964607 containerd[1571]: time="2026-03-07T01:39:42.964438694Z" level=info msg="Ensure that sandbox 6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2 in task-service has been cleanup successfully"
Mar 7 01:39:42.994250 containerd[1571]: time="2026-03-07T01:39:42.993461004Z" level=info msg="RemovePodSandbox \"6223e4835ef3826aaf68f51c16885a993c2275fd28fba94f55e08187169cbab2\" returns successfully"
Mar 7 01:39:43.003655 containerd[1571]: time="2026-03-07T01:39:43.003599798Z" level=info msg="StopPodSandbox for \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\""
Mar 7 01:39:43.006261 containerd[1571]: time="2026-03-07T01:39:43.006155944Z" level=info msg="TearDown network for sandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" successfully"
Mar 7 01:39:43.006458 containerd[1571]: time="2026-03-07T01:39:43.006426538Z" level=info msg="StopPodSandbox for \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" returns successfully"
Mar 7 01:39:43.007781 containerd[1571]: time="2026-03-07T01:39:43.007560269Z" level=info msg="RemovePodSandbox for \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\""
Mar 7 01:39:43.007781 containerd[1571]: time="2026-03-07T01:39:43.007657099Z" level=info msg="Forcibly stopping sandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\""
Mar 7 01:39:43.008046 containerd[1571]: time="2026-03-07T01:39:43.007885638Z" level=info msg="TearDown network for sandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" successfully"
Mar 7 01:39:43.010354 containerd[1571]: time="2026-03-07T01:39:43.010304306Z" level=info msg="Ensure that sandbox e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181 in task-service has been cleanup successfully"
Mar 7 01:39:43.025262 containerd[1571]: time="2026-03-07T01:39:43.025136933Z" level=info msg="RemovePodSandbox \"e475878d175224686729474a5730ab29c24983fef4795a4c3944fc4409d60181\" returns successfully"
Mar 7 01:39:44.212111 systemd-networkd[1458]: lxc_health: Link UP
Mar 7 01:39:44.216490 systemd-networkd[1458]: lxc_health: Gained carrier
Mar 7 01:39:45.037302 kubelet[2804]: E0307 01:39:45.037139 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:45.162271 kubelet[2804]: E0307 01:39:45.161433 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:45.939725 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Mar 7 01:39:46.169268 kubelet[2804]: E0307 01:39:46.169067 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:39:58.252080 sshd[4599]: Connection closed by 10.0.0.1 port 47030
Mar 7 01:39:58.252997 sshd-session[4591]: pam_unix(sshd:session): session closed for user core
Mar 7 01:39:58.259611 systemd[1]: sshd@27-10.0.0.82:22-10.0.0.1:47030.service: Deactivated successfully.
Mar 7 01:39:58.262857 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:39:58.265733 systemd-logind[1546]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:39:58.269354 systemd-logind[1546]: Removed session 28.