Jan 20 00:48:28.083041 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:48:28.083074 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:48:28.083092 kernel: BIOS-provided physical RAM map: Jan 20 00:48:28.083158 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 00:48:28.083170 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 00:48:28.083179 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 00:48:28.083191 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 00:48:28.083202 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 00:48:28.083212 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:48:28.083227 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 00:48:28.083237 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 00:48:28.083246 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 00:48:28.083256 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 00:48:28.083265 kernel: NX (Execute Disable) protection: active Jan 20 00:48:28.083276 kernel: APIC: Static calls initialized Jan 20 00:48:28.083293 kernel: SMBIOS 2.8 present. 
Jan 20 00:48:28.083305 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 00:48:28.083316 kernel: Hypervisor detected: KVM Jan 20 00:48:28.083325 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:48:28.083335 kernel: kvm-clock: using sched offset of 4468322498 cycles Jan 20 00:48:28.083346 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:48:28.083357 kernel: tsc: Detected 2445.424 MHz processor Jan 20 00:48:28.083367 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:48:28.083377 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:48:28.083387 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 00:48:28.083401 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 00:48:28.083411 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:48:28.083421 kernel: Using GB pages for direct mapping Jan 20 00:48:28.083432 kernel: ACPI: Early table checksum verification disabled Jan 20 00:48:28.083442 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 00:48:28.083452 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083462 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083472 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083486 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 00:48:28.083495 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083505 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083515 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083525 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:28.083535 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 00:48:28.083545 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 00:48:28.083561 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 00:48:28.083575 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 00:48:28.083586 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 00:48:28.083596 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 00:48:28.083608 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 00:48:28.083668 kernel: No NUMA configuration found Jan 20 00:48:28.083681 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 00:48:28.083691 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 20 00:48:28.083706 kernel: Zone ranges: Jan 20 00:48:28.083716 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:48:28.083727 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 00:48:28.083737 kernel: Normal empty Jan 20 00:48:28.083748 kernel: Movable zone start for each node Jan 20 00:48:28.083759 kernel: Early memory node ranges Jan 20 00:48:28.083770 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 20 00:48:28.083781 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 00:48:28.083792 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 20 00:48:28.083807 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:48:28.083817 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 00:48:28.083828 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 00:48:28.083839 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:48:28.083850 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:48:28.083861 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:48:28.083871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:48:28.083883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:48:28.083893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:48:28.083904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:48:28.083919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:48:28.083930 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:48:28.083940 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:48:28.083951 kernel: TSC deadline timer available Jan 20 00:48:28.083961 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:48:28.083972 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:48:28.083982 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:48:28.083993 kernel: kvm-guest: setup PV sched yield Jan 20 00:48:28.084004 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 00:48:28.084019 kernel: Booting paravirtualized kernel on KVM Jan 20 00:48:28.084030 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:48:28.084040 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:48:28.084051 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:48:28.084061 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:48:28.084072 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:48:28.084082 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:48:28.084092 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:48:28.084171 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:48:28.084187 kernel: random: crng init done Jan 20 00:48:28.084198 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:48:28.084209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:48:28.084219 kernel: Fallback order for Node 0: 0 Jan 20 00:48:28.084229 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 20 00:48:28.084240 kernel: Policy zone: DMA32 Jan 20 00:48:28.084250 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:48:28.084261 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 136884K reserved, 0K cma-reserved) Jan 20 00:48:28.084275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:48:28.084286 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:48:28.084296 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:48:28.084306 kernel: Dynamic Preempt: voluntary Jan 20 00:48:28.084317 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:48:28.084328 kernel: rcu: RCU event tracing is enabled. Jan 20 00:48:28.084339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:48:28.084350 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:48:28.084360 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:48:28.084374 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:48:28.084385 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:48:28.084396 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:48:28.084406 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:48:28.084417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:48:28.084427 kernel: Console: colour VGA+ 80x25 Jan 20 00:48:28.084438 kernel: printk: console [ttyS0] enabled Jan 20 00:48:28.084448 kernel: ACPI: Core revision 20230628 Jan 20 00:48:28.084458 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:48:28.084469 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:48:28.084483 kernel: x2apic enabled Jan 20 00:48:28.084494 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:48:28.084504 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:48:28.084515 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:48:28.084525 kernel: kvm-guest: setup PV IPIs Jan 20 00:48:28.084536 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:48:28.084560 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:48:28.084571 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 20 00:48:28.084583 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:48:28.084593 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:48:28.084604 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:48:28.084670 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:48:28.084685 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:48:28.084698 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:48:28.084711 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:48:28.084721 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:48:28.084736 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 20 00:48:28.084747 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:48:28.084757 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:48:28.084767 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:48:28.084777 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:48:28.084787 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:48:28.084798 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:48:28.084808 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:48:28.084821 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:48:28.084831 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:48:28.084841 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:48:28.084851 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:48:28.084861 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:48:28.084872 kernel: landlock: Up and running. Jan 20 00:48:28.084882 kernel: SELinux: Initializing. Jan 20 00:48:28.084892 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:48:28.084902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:48:28.084915 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:48:28.084925 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:48:28.084938 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:48:28.084948 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:48:28.084959 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:48:28.084969 kernel: signal: max sigframe size: 1776 Jan 20 00:48:28.084979 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:48:28.084990 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:48:28.085000 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:48:28.085013 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:48:28.085023 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:48:28.085033 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:48:28.085043 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:48:28.085053 kernel: smpboot: Max logical packages: 1 Jan 20 00:48:28.085063 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 20 00:48:28.085073 kernel: devtmpfs: initialized Jan 20 00:48:28.085083 kernel: x86/mm: Memory block size: 128MB Jan 20 00:48:28.085093 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:48:28.085160 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:48:28.085170 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:48:28.085180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:48:28.085190 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:48:28.085200 kernel: audit: type=2000 audit(1768870105.786:1): state=initialized audit_enabled=0 res=1 Jan 20 00:48:28.085210 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:48:28.085220 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:48:28.085230 kernel: cpuidle: using governor menu Jan 20 00:48:28.085240 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:48:28.085253 kernel: dca service started, version 1.12.1 Jan 20 00:48:28.085263 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:48:28.085273 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:48:28.085283 kernel: PCI: Using configuration type 1 for base access Jan 20 00:48:28.085293 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 20 00:48:28.085304 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:48:28.085314 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:48:28.085326 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:48:28.085337 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:48:28.085351 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:48:28.085361 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:48:28.085371 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:48:28.085381 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:48:28.085391 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:48:28.085401 kernel: ACPI: Interpreter enabled Jan 20 00:48:28.085411 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:48:28.085421 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:48:28.085431 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:48:28.085444 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:48:28.085454 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:48:28.085465 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:48:28.085725 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:48:28.085894 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:48:28.086048 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:48:28.086061 kernel: PCI host bridge to bus 0000:00 Jan 20 00:48:28.086289 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:48:28.086434 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 20 00:48:28.086575 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:48:28.086768 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:48:28.086909 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:48:28.087047 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 00:48:28.087251 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:48:28.087505 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:48:28.087743 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:48:28.087925 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 20 00:48:28.088255 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 20 00:48:28.088460 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 20 00:48:28.088706 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:48:28.088924 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:48:28.089196 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 20 00:48:28.089444 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 20 00:48:28.089688 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 00:48:28.089910 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:48:28.090171 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 20 00:48:28.090373 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 20 00:48:28.090576 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 20 00:48:28.090838 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:48:28.091038 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 20 00:48:28.091300 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 20 00:48:28.091498 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 00:48:28.091739 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 20 00:48:28.091951 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:48:28.092310 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:48:28.092680 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:48:28.092845 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 20 00:48:28.092998 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 20 00:48:28.093238 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:48:28.093396 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 20 00:48:28.093409 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:48:28.093425 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:48:28.093436 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:48:28.093446 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:48:28.093456 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:48:28.093466 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:48:28.093476 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:48:28.093486 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:48:28.093496 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
20 00:48:28.093506 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:48:28.093519 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:48:28.093529 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:48:28.093539 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:48:28.093549 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:48:28.093559 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:48:28.093569 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:48:28.093579 kernel: iommu: Default domain type: Translated Jan 20 00:48:28.093589 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:48:28.093599 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:48:28.093613 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:48:28.093666 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 00:48:28.093677 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 00:48:28.093838 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:48:28.093991 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:48:28.094231 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:48:28.094246 kernel: vgaarb: loaded Jan 20 00:48:28.094257 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 00:48:28.094272 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:48:28.094283 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:48:28.094293 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:48:28.094303 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:48:28.094313 kernel: pnp: PnP ACPI init Jan 20 00:48:28.094480 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:48:28.094494 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:48:28.094504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:48:28.094518 kernel: NET: Registered PF_INET protocol family Jan 20 00:48:28.094529 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:48:28.094539 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:48:28.094550 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:48:28.094560 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:48:28.094570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:48:28.094581 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:48:28.094591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:48:28.094601 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:48:28.094614 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:48:28.094667 kernel: NET: Registered PF_XDP protocol family Jan 20 00:48:28.094816 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:48:28.094958 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:48:28.095152 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:48:28.095303 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:48:28.095441 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 20 00:48:28.095579 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 00:48:28.095596 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:48:28.095606 kernel: Initialise system trusted keyrings Jan 20 00:48:28.095656 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:48:28.095668 kernel: Key type asymmetric registered Jan 20 00:48:28.095678 kernel: Asymmetric key parser 'x509' registered Jan 20 00:48:28.095688 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:48:28.095699 kernel: io scheduler mq-deadline registered Jan 20 00:48:28.095709 kernel: io scheduler kyber registered Jan 20 00:48:28.095719 kernel: io scheduler bfq registered Jan 20 00:48:28.095729 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:48:28.095743 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:48:28.095754 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:48:28.095764 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:48:28.095774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:48:28.095784 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 00:48:28.095794 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:48:28.095804 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:48:28.095814 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:48:28.095975 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:48:28.095993 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:48:28.096202 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:48:28.096530 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:48:27 UTC (1768870107) Jan 20 00:48:28.096762 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:48:28.096781 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:48:28.096792 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:48:28.096803 kernel: Segment Routing with IPv6 Jan 20 00:48:28.096820 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:48:28.096832 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:48:28.096843 kernel: Key type dns_resolver registered Jan 20 00:48:28.096854 kernel: IPI shorthand broadcast: enabled Jan 20 00:48:28.096865 kernel: sched_clock: Marking stable (1031020248, 293301940)->(1646480149, -322157961) Jan 20 00:48:28.096876 kernel: registered taskstats version 1 Jan 20 00:48:28.096888 kernel: Loading compiled-in X.509 certificates Jan 20 00:48:28.096898 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:48:28.096909 kernel: Key type .fscrypt registered Jan 20 00:48:28.096921 kernel: Key type fscrypt-provisioning registered Jan 20 00:48:28.096936 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:48:28.096947 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:48:28.096958 kernel: ima: No architecture policies found Jan 20 00:48:28.096969 kernel: clk: Disabling unused clocks Jan 20 00:48:28.096980 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:48:28.096991 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:48:28.097002 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:48:28.097012 kernel: Run /init as init process Jan 20 00:48:28.097028 kernel: with arguments: Jan 20 00:48:28.097038 kernel: /init Jan 20 00:48:28.097050 kernel: with environment: Jan 20 00:48:28.097061 kernel: HOME=/ Jan 20 00:48:28.097071 kernel: TERM=linux Jan 20 00:48:28.097084 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:48:28.097219 systemd[1]: Detected virtualization kvm. Jan 20 00:48:28.097235 systemd[1]: Detected architecture x86-64. Jan 20 00:48:28.097255 systemd[1]: Running in initrd. Jan 20 00:48:28.097267 systemd[1]: No hostname configured, using default hostname. Jan 20 00:48:28.097279 systemd[1]: Hostname set to . Jan 20 00:48:28.097293 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:48:28.097306 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:48:28.097319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:48:28.097332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:48:28.097345 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:48:28.097363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:48:28.097376 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:48:28.097389 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:48:28.097404 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:48:28.097418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:48:28.097430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:48:28.097443 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:48:28.097461 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:48:28.097475 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:48:28.097487 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:48:28.097520 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:48:28.097538 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:48:28.097551 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:48:28.097568 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:48:28.097581 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 20 00:48:28.097595 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:48:28.097608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:48:28.097663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:48:28.097678 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:48:28.097691 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:48:28.097705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:48:28.097718 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:48:28.097736 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:48:28.097749 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:48:28.097763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:48:28.097775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:48:28.097816 systemd-journald[194]: Collecting audit messages is disabled.
Jan 20 00:48:28.097852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:48:28.097867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:48:28.097879 systemd-journald[194]: Journal started
Jan 20 00:48:28.097906 systemd-journald[194]: Runtime Journal (/run/log/journal/82118951f10446948682d76430eb6e72) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:48:28.104869 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:48:28.108337 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:48:28.108428 systemd-modules-load[195]: Inserted module 'overlay'
Jan 20 00:48:28.119408 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:48:28.123313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:48:28.130590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:48:28.133835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:48:28.169515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:48:28.170257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:48:28.328930 kernel: Bridge firewalling registered
Jan 20 00:48:28.171389 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 20 00:48:28.326881 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:48:28.334740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:48:28.337335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:48:28.369500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:48:28.372218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:48:28.386333 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:48:28.392254 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:48:28.402792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:48:28.407509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:48:28.417474 dracut-cmdline[228]: dracut-dracut-053 Jan 20 00:48:28.422301 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:48:28.468737 systemd-resolved[232]: Positive Trust Anchors: Jan 20 00:48:28.468770 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:48:28.468796 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:48:28.472488 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 20 00:48:28.474240 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:48:28.499770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:48:28.550192 kernel: SCSI subsystem initialized Jan 20 00:48:28.559153 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:48:28.573218 kernel: iscsi: registered transport (tcp) Jan 20 00:48:28.595290 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:48:28.595356 kernel: QLogic iSCSI HBA Driver Jan 20 00:48:28.646360 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:48:28.657278 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 00:48:28.688076 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 00:48:28.688153 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:48:28.692177 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 00:48:28.734226 kernel: raid6: avx2x4 gen() 34201 MB/s Jan 20 00:48:28.752208 kernel: raid6: avx2x2 gen() 30807 MB/s Jan 20 00:48:28.771524 kernel: raid6: avx2x1 gen() 25574 MB/s Jan 20 00:48:28.771608 kernel: raid6: using algorithm avx2x4 gen() 34201 MB/s Jan 20 00:48:28.791760 kernel: raid6: .... xor() 4779 MB/s, rmw enabled Jan 20 00:48:28.791842 kernel: raid6: using avx2x2 recovery algorithm Jan 20 00:48:28.819191 kernel: xor: automatically using best checksumming function avx Jan 20 00:48:28.981191 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:48:28.996215 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:48:29.015378 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:48:29.037494 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 20 00:48:29.045491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:48:29.073449 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:48:29.088702 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 20 00:48:29.127762 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:48:29.144335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:48:29.223529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:48:29.240055 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:48:29.251538 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:48:29.257024 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:48:29.268261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:48:29.282664 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 00:48:29.278344 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:48:29.299216 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 00:48:29.299462 kernel: AVX2 version of gcm_enc/dec engaged. Jan 20 00:48:29.299483 kernel: AES CTR mode by8 optimization enabled Jan 20 00:48:29.299721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:48:29.316401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:48:29.326739 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 00:48:29.316513 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:48:29.343276 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 00:48:29.343336 kernel: GPT:9289727 != 19775487 Jan 20 00:48:29.343346 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 00:48:29.343356 kernel: GPT:9289727 != 19775487 Jan 20 00:48:29.343364 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 00:48:29.343373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:48:29.344844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:48:29.355489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:48:29.356049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:48:29.365314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:48:29.386159 kernel: libata version 3.00 loaded. Jan 20 00:48:29.391523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:48:29.393351 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:48:29.424800 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 00:48:29.425189 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 00:48:29.436204 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 00:48:29.436456 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 00:48:29.436897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 20 00:48:29.450987 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (464) Jan 20 00:48:29.451008 kernel: scsi host0: ahci Jan 20 00:48:29.451403 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473) Jan 20 00:48:29.451415 kernel: scsi host1: ahci Jan 20 00:48:29.453158 kernel: scsi host2: ahci Jan 20 00:48:29.453403 kernel: scsi host3: ahci Jan 20 00:48:29.456375 kernel: scsi host4: ahci Jan 20 00:48:29.456549 kernel: scsi host5: ahci Jan 20 00:48:29.456738 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 20 00:48:29.456749 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 20 00:48:29.456759 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 20 00:48:29.456768 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 20 00:48:29.456777 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 20 00:48:29.456786 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 20 00:48:29.641448 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:48:29.660428 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 00:48:29.686429 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 00:48:29.693749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 00:48:29.712784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:48:29.734458 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:48:29.740448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:48:29.756967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:48:29.756995 disk-uuid[569]: Primary Header is updated. Jan 20 00:48:29.756995 disk-uuid[569]: Secondary Entries is updated. Jan 20 00:48:29.756995 disk-uuid[569]: Secondary Header is updated. Jan 20 00:48:29.781807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:48:29.781836 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 00:48:29.781854 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 00:48:29.781878 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 00:48:29.781894 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 00:48:29.781910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:48:29.781924 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 00:48:29.782093 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 00:48:29.803774 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 00:48:29.803825 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 00:48:29.803840 kernel: ata3.00: applying bridge limits Jan 20 00:48:29.806230 kernel: ata3.00: configured for UDMA/100 Jan 20 00:48:29.821061 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 00:48:29.878013 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 00:48:29.878420 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:48:30.035579 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 00:48:30.782174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:48:30.782722 disk-uuid[571]: The operation has completed successfully. Jan 20 00:48:30.829037 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:48:30.829663 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 00:48:30.867076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 00:48:30.877843 sh[597]: Success Jan 20 00:48:30.900184 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 20 00:48:30.959353 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:48:30.981565 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 00:48:30.992253 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 00:48:31.008304 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 00:48:31.008354 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:48:31.008383 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 00:48:31.011539 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:48:31.013940 kernel: BTRFS info (device dm-0): using free space tree Jan 20 00:48:31.026431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 00:48:31.030048 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:48:31.047452 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 00:48:31.051785 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:48:31.163894 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:48:31.163966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:48:31.163978 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:48:31.174190 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:48:31.188590 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 00:48:31.194320 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:48:31.205158 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 00:48:31.217481 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 00:48:31.647021 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:48:31.767035 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 00:48:31.810057 systemd-networkd[782]: lo: Link UP
Jan 20 00:48:31.810092 systemd-networkd[782]: lo: Gained carrier
Jan 20 00:48:31.812864 systemd-networkd[782]: Enumeration completed
Jan 20 00:48:31.812997 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:48:31.816806 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:48:31.816812 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:48:31.818756 systemd-networkd[782]: eth0: Link UP
Jan 20 00:48:31.838087 ignition[725]: Ignition 2.19.0
Jan 20 00:48:31.818763 systemd-networkd[782]: eth0: Gained carrier
Jan 20 00:48:31.838146 ignition[725]: Stage: fetch-offline
Jan 20 00:48:31.818774 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:48:31.838281 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:31.819920 systemd[1]: Reached target network.target - Network.
Jan 20 00:48:31.838297 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:31.964774 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:48:31.838608 ignition[725]: parsed url from cmdline: ""
Jan 20 00:48:31.838613 ignition[725]: no config URL provided
Jan 20 00:48:31.838619 ignition[725]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:48:31.838672 ignition[725]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:48:31.838755 ignition[725]: op(1): [started] loading QEMU firmware config module
Jan 20 00:48:31.838766 ignition[725]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:48:31.981777 ignition[725]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:48:32.227207 ignition[725]: parsing config with SHA512: 28bf665648b1fd48b423a22ab0785b34c0459cf2c6da2aa05709df58b835cd41c9bf5324d9c6d5a5d50ece5ad5347dac5547b9b044aa95be0fe7f5aca0a8b29a
Jan 20 00:48:32.239968 unknown[725]: fetched base config from "system"
Jan 20 00:48:32.239993 unknown[725]: fetched user config from "qemu"
Jan 20 00:48:32.240852 ignition[725]: fetch-offline: fetch-offline passed
Jan 20 00:48:32.241031 ignition[725]: Ignition finished successfully
Jan 20 00:48:32.249978 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:48:32.255689 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:48:32.277372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 00:48:32.330599 ignition[789]: Ignition 2.19.0
Jan 20 00:48:32.330668 ignition[789]: Stage: kargs
Jan 20 00:48:32.331156 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:32.331171 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:32.332009 ignition[789]: kargs: kargs passed
Jan 20 00:48:32.332067 ignition[789]: Ignition finished successfully
Jan 20 00:48:32.354895 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:48:32.370440 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:48:32.409223 ignition[797]: Ignition 2.19.0
Jan 20 00:48:32.409269 ignition[797]: Stage: disks
Jan 20 00:48:32.413692 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:48:32.409480 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:32.422341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:48:32.409495 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:32.431051 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:48:32.410593 ignition[797]: disks: disks passed
Jan 20 00:48:32.436404 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:48:32.410702 ignition[797]: Ignition finished successfully
Jan 20 00:48:32.440860 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:48:32.445614 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:48:32.480521 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:48:32.514763 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:48:32.520576 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:48:32.538228 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:48:32.662887 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:48:32.665585 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:48:32.673253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:48:32.688326 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:48:32.697813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:48:32.708384 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Jan 20 00:48:32.708682 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:48:32.724330 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:48:32.724363 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:48:32.724381 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:48:32.708794 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:48:32.708822 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:48:32.742798 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:48:32.757207 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:48:32.763415 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:48:32.768407 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:48:32.824329 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:48:32.830742 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:48:32.837916 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:48:32.843372 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:48:33.007209 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:48:33.026494 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:48:33.039612 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:48:33.048918 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 00:48:33.056406 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:48:33.095683 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 00:48:33.289725 ignition[928]: INFO : Ignition 2.19.0 Jan 20 00:48:33.289725 ignition[928]: INFO : Stage: mount Jan 20 00:48:33.295618 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:48:33.295618 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:48:33.307339 ignition[928]: INFO : mount: mount passed Jan 20 00:48:33.307339 ignition[928]: INFO : Ignition finished successfully Jan 20 00:48:33.311460 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 00:48:33.332461 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 00:48:33.684537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:48:33.790716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Jan 20 00:48:33.790836 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:48:33.790855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:48:33.793898 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:48:33.804175 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:48:33.806363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 00:48:33.857680 ignition[958]: INFO : Ignition 2.19.0 Jan 20 00:48:33.857680 ignition[958]: INFO : Stage: files Jan 20 00:48:33.866292 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:48:33.866292 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:48:33.866292 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 20 00:48:33.864289 systemd-networkd[782]: eth0: Gained IPv6LL Jan 20 00:48:33.881462 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 00:48:33.881462 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 00:48:33.890475 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 00:48:33.895041 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 00:48:33.895041 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 00:48:33.895041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 00:48:33.895041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 00:48:33.891726 unknown[958]: wrote ssh authorized keys file for user: core Jan 20 00:48:34.106560 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 00:48:34.582885 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 00:48:34.582885 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 00:48:34.582885 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 20 00:48:34.737995 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 20 00:48:35.088388 kernel: hrtimer: interrupt took 3894292 ns Jan 20 00:48:35.219959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 00:48:35.219959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:48:35.233738 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 00:48:35.672828 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 20 00:48:37.977162 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:48:37.977162 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:48:37.989633 
ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 20 00:48:37.989633 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 00:48:38.050489 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:48:38.050489 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:48:38.071746 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 00:48:38.071746 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:48:38.071746 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 00:48:38.071746 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:48:38.071746 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:48:38.071746 ignition[958]: INFO : files: files passed Jan 20 00:48:38.071746 ignition[958]: INFO : Ignition finished successfully Jan 20 00:48:38.071887 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:48:38.122566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:48:38.137325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 00:48:38.143753 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 00:48:38.143861 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:48:38.172200 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 00:48:38.179854 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:48:38.179854 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:48:38.175478 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:48:38.195430 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:48:38.180493 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:48:38.207543 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:48:38.240850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:48:38.241018 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:48:38.248352 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
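The files stage above fetches the helm and cilium-cli tarballs, writes several manifests for the core user, enables prepare-helm.service, and removes the enablement symlinks for coreos-metadata.service. The provisioning data driving this is an Ignition JSON config supplied by the platform; the config itself is not shown in the log, so the Python sketch below only illustrates the general shape such a config could take. The spec version and unit body are assumptions, not values taken from this machine; the helm URL and unit names are the ones that appear in the log:

    import json

    # Hypothetical Ignition-style config fragment that would produce file
    # writes and unit presets similar to the ones logged above.
    config = {
        "ignition": {"version": "3.4.0"},          # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                    },
                },
            ]
        },
        "systemd": {
            "units": [
                # Enabled by the op(12) preset step in the log.
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
                # Disabled by the op(10) preset step in the log.
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }
    print(json.dumps(config, indent=2))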
Jan 20 00:48:38.255226 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:48:38.263434 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:48:38.280450 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:48:38.304544 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:48:38.328491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:48:38.342337 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:48:38.346451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:48:38.354369 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:48:38.361378 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:48:38.361617 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:48:38.371552 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:48:38.377312 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:48:38.384404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:48:38.390839 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:48:38.397451 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:48:38.404046 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:48:38.410503 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:48:38.418225 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:48:38.425996 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:48:38.429791 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:48:38.432888 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:48:38.433018 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:48:38.439894 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:48:38.446321 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:48:38.452842 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 00:48:38.453200 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:48:38.459193 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:48:38.459383 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:48:38.468357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:48:38.468511 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:48:38.474946 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:48:38.481810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:48:38.485425 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:48:38.491086 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:48:38.498311 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:48:38.506366 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:48:38.506493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 20 00:48:38.513170 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:48:38.513291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:48:38.520475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:48:38.520617 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:48:38.529453 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 00:48:38.529562 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:48:38.557602 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:48:38.566581 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:48:38.566834 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:48:38.578691 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:48:38.583623 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:48:38.583870 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:48:38.594285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 00:48:38.595170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:48:38.613560 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 00:48:38.613726 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:48:38.643067 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:48:38.756850 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:48:38.757046 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:48:38.797037 ignition[1014]: INFO : Ignition 2.19.0 Jan 20 00:48:38.797037 ignition[1014]: INFO : Stage: umount Jan 20 00:48:38.810836 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:48:38.810836 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:48:38.810836 ignition[1014]: INFO : umount: umount passed Jan 20 00:48:38.810836 ignition[1014]: INFO : Ignition finished successfully Jan 20 00:48:38.800553 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:48:38.800824 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:48:38.805450 systemd[1]: Stopped target network.target - Network. Jan 20 00:48:38.810752 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 00:48:38.810854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:48:38.819712 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:48:38.819789 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:48:38.828378 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:48:38.828456 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:48:38.836058 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:48:38.836208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:48:38.844572 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:48:38.844712 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 00:48:38.851432 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:48:38.860276 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 20 00:48:38.860505 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 20 00:48:38.870162 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:48:38.870410 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:48:38.876402 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:48:38.876549 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 00:48:38.885993 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:48:38.886088 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:48:38.906353 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:48:38.910380 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:48:38.910476 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:48:38.917852 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:48:38.917939 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:48:38.924517 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:48:38.924578 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:48:38.928165 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:48:38.928216 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:48:38.934071 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:48:38.950591 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:48:38.950880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:48:38.957356 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:48:38.957609 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:48:38.965740 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:48:38.965820 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 00:48:38.972633 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:48:38.972755 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:48:38.979351 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:48:38.979433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:48:38.987497 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:48:38.987557 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:48:38.993285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:48:38.993357 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:48:39.019422 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:48:39.024196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:48:39.024275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:48:39.031697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:48:39.031767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:48:39.038854 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 20 00:48:39.038991 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:48:39.046464 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 00:48:39.070834 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:48:39.143801 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 20 00:48:39.084618 systemd[1]: Switching root. Jan 20 00:48:39.147487 systemd-journald[194]: Journal stopped Jan 20 00:48:40.800088 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:48:40.801566 kernel: SELinux: policy capability open_perms=1 Jan 20 00:48:40.801595 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:48:40.801622 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:48:40.801685 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:48:40.801703 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:48:40.801714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:48:40.801735 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:48:40.801750 kernel: audit: type=1403 audit(1768870119.363:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 00:48:40.801762 systemd[1]: Successfully loaded SELinux policy in 54.356ms. Jan 20 00:48:40.801781 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.286ms. Jan 20 00:48:40.801801 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:48:40.801821 systemd[1]: Detected virtualization kvm. Jan 20 00:48:40.801840 systemd[1]: Detected architecture x86-64. Jan 20 00:48:40.801860 systemd[1]: Detected first boot. Jan 20 00:48:40.801880 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:48:40.801897 zram_generator::config[1059]: No configuration found. Jan 20 00:48:40.801909 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:48:40.801920 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 00:48:40.801930 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 00:48:40.801941 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 00:48:40.801953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:48:40.801964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:48:40.801975 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:48:40.801987 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:48:40.802041 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:48:40.802053 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:48:40.802067 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:48:40.802086 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:48:40.802170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:48:40.802185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
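After the switch into the real root, systemd detects KVM, treats this as a first boot, and initializes the machine ID from the VM UUID. A small, hypothetical sketch of comparing the two identifiers on a running guest; the DMI sysfs path is the usual location of the SMBIOS product UUID on QEMU and typically requires root to read, and the dash-stripping comparison is an assumption about how the ID is derived, not something stated in this log:

    from pathlib import Path

    # /etc/machine-id holds the 32-hex-digit machine ID systemd settled on.
    machine_id = Path("/etc/machine-id").read_text().strip()

    # On QEMU/KVM guests the SMBIOS product UUID is exposed via sysfs;
    # this is the VM UUID the log says the machine ID was taken from.
    product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    # On this code path the machine ID is typically the UUID lower-cased
    # with dashes removed; treat this as an assumption, not a guarantee.
    normalized = product_uuid.replace("-", "").lower()
    print("machine-id  :", machine_id)
    print("product_uuid:", normalized)
    print("match       :", machine_id == normalized)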
Jan 20 00:48:40.802196 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:48:40.802207 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 00:48:40.802222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 00:48:40.802234 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:48:40.802244 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 00:48:40.802255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:48:40.802265 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 00:48:40.802284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 00:48:40.802303 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 00:48:40.802323 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:48:40.802346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:48:40.802357 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:48:40.802368 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:48:40.802379 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:48:40.802390 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:48:40.802438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:48:40.802450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:48:40.802461 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:48:40.802471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:48:40.802486 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:48:40.802496 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:48:40.802507 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:48:40.802518 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:48:40.802528 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:40.802539 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:48:40.802549 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:48:40.802560 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:48:40.802571 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:48:40.802584 systemd[1]: Reached target machines.target - Containers. Jan 20 00:48:40.802595 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:48:40.802615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:48:40.802633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:48:40.802694 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 20 00:48:40.802712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:48:40.802723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:48:40.802734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:48:40.802795 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 00:48:40.802817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:48:40.802836 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 00:48:40.802856 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 00:48:40.802868 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 00:48:40.802879 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 00:48:40.802889 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 00:48:40.802900 kernel: fuse: init (API version 7.39) Jan 20 00:48:40.802910 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:48:40.802924 kernel: ACPI: bus type drm_connector registered Jan 20 00:48:40.802935 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:48:40.802945 kernel: loop: module loaded Jan 20 00:48:40.802956 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 00:48:40.802966 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:48:40.802977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:48:40.803010 systemd-journald[1140]: Collecting audit messages is disabled. Jan 20 00:48:40.803033 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 00:48:40.803044 systemd[1]: Stopped verity-setup.service. Jan 20 00:48:40.803056 systemd-journald[1140]: Journal started Jan 20 00:48:40.803076 systemd-journald[1140]: Runtime Journal (/run/log/journal/82118951f10446948682d76430eb6e72) is 6.0M, max 48.4M, 42.3M free. Jan 20 00:48:40.196355 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:48:40.221476 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 00:48:40.222213 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 00:48:40.222581 systemd[1]: systemd-journald.service: Consumed 2.086s CPU time. Jan 20 00:48:40.815219 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:40.821864 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:48:40.826630 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:48:40.831223 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:48:40.835891 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:48:40.839958 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:48:40.844541 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:48:40.849242 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 00:48:40.853486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 20 00:48:40.858859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:48:40.866173 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:48:40.866467 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:48:40.871680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:48:40.871915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:48:41.031860 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:48:41.032155 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:48:41.036930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:48:41.037221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:48:41.042531 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 00:48:41.042794 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:48:41.047482 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:48:41.047726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:48:41.052620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:48:41.057714 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:48:41.065857 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 00:48:41.086833 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:48:41.103383 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:48:41.109737 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 00:48:41.114618 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:48:41.114717 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:48:41.120396 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 00:48:41.127591 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:48:41.133638 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:48:41.136919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:48:41.138252 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:48:41.148234 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:48:41.152609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:48:41.156294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:48:41.161398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:48:41.164083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:48:41.180322 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 20 00:48:41.184459 systemd-journald[1140]: Time spent on flushing to /var/log/journal/82118951f10446948682d76430eb6e72 is 103.903ms for 946 entries. Jan 20 00:48:41.184459 systemd-journald[1140]: System Journal (/var/log/journal/82118951f10446948682d76430eb6e72) is 8.0M, max 195.6M, 187.6M free. Jan 20 00:48:41.321861 systemd-journald[1140]: Received client request to flush runtime journal. Jan 20 00:48:41.188949 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:48:41.195037 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:48:41.199469 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:48:41.205709 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:48:41.325982 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:48:41.333786 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:48:41.339582 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:48:41.354169 kernel: loop0: detected capacity change from 0 to 142488 Jan 20 00:48:41.354400 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 00:48:41.360174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:48:41.385939 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 00:48:41.406169 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:48:41.397763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:48:41.408586 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 00:48:41.421926 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:48:41.423382 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 00:48:41.439552 kernel: loop1: detected capacity change from 0 to 140768 Jan 20 00:48:41.445479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:48:41.453802 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 20 00:48:41.671217 kernel: loop2: detected capacity change from 0 to 229808 Jan 20 00:48:41.723191 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 20 00:48:41.723209 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 20 00:48:41.731168 kernel: loop3: detected capacity change from 0 to 142488 Jan 20 00:48:41.743332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:48:41.775181 kernel: loop4: detected capacity change from 0 to 140768 Jan 20 00:48:41.800186 kernel: loop5: detected capacity change from 0 to 229808 Jan 20 00:48:41.807498 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:48:41.817825 (sd-merge)[1196]: Merged extensions into '/usr'. Jan 20 00:48:41.825506 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:48:41.825539 systemd[1]: Reloading... Jan 20 00:48:42.099234 zram_generator::config[1223]: No configuration found. 
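systemd-sysext reports merging the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr, which is what triggers the daemon reload that follows; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier in the boot. A minimal sketch that lists extension images from the standard sysext search directories (the directory list follows the systemd-sysext documentation and is an assumption here, not output captured from this machine):

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw images or extension trees.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
                   "/var/lib/extensions", "/usr/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            # e.g. Ignition created /etc/extensions/kubernetes.raw as a symlink
            # to /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw.
            target = entry.resolve() if entry.is_symlink() else entry
            print(f"{entry} -> {target}")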
Jan 20 00:48:42.438562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:48:42.470194 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 00:48:42.494481 systemd[1]: Reloading finished in 668 ms. Jan 20 00:48:42.536410 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:48:42.540536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:48:42.564025 systemd[1]: Starting ensure-sysext.service... Jan 20 00:48:42.570825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:48:42.588757 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:48:42.588776 systemd[1]: Reloading... Jan 20 00:48:42.829181 zram_generator::config[1288]: No configuration found. Jan 20 00:48:42.839854 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:48:42.840528 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:48:42.842809 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:48:42.844192 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 20 00:48:42.844329 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 20 00:48:42.849693 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:48:42.849732 systemd-tmpfiles[1261]: Skipping /boot Jan 20 00:48:42.867366 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:48:42.867421 systemd-tmpfiles[1261]: Skipping /boot Jan 20 00:48:42.976068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:48:43.028249 systemd[1]: Reloading finished in 438 ms. Jan 20 00:48:43.185764 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:48:43.201937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:48:43.220732 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:48:43.234554 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 00:48:43.242274 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:48:43.252351 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:48:43.265405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:48:43.285304 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:48:43.306468 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:48:43.311887 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:48:43.316425 augenrules[1348]: No rules Jan 20 00:48:43.317901 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 20 00:48:43.329243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:43.329534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:48:43.339582 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:48:43.348976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:48:43.356434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:48:43.359063 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Jan 20 00:48:43.363613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:48:43.367776 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:48:43.372049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:43.374845 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:48:43.382548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:48:43.382831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:48:43.388694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:48:43.389013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:48:43.395217 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:48:43.395386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:48:43.404526 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:48:43.413259 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:48:43.418946 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:48:43.439542 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:48:43.453033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:43.453796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:48:43.457520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:48:43.474550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:48:43.484079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:48:43.489688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:48:43.496205 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:48:43.501302 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:48:43.501451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:43.503768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 20 00:48:43.503998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:48:43.513753 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:48:43.515201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:48:43.562544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:48:43.563011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:48:43.571317 systemd[1]: Finished ensure-sysext.service. Jan 20 00:48:43.586389 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 00:48:43.591416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:43.591560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:48:43.600426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:48:43.608623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:48:43.615075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:48:43.620796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:48:43.620849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:48:43.627383 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:48:43.632060 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:48:43.632090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:48:43.632602 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:48:43.632841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:48:43.664557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:48:43.665054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:48:43.671878 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:48:43.672456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:48:43.674018 systemd-resolved[1336]: Positive Trust Anchors: Jan 20 00:48:43.674070 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:48:43.674228 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:48:43.682186 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 20 00:48:43.689146 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 20 00:48:43.697210 kernel: ACPI: button: Power Button [PWRF] Jan 20 00:48:43.700022 systemd-resolved[1336]: Defaulting to hostname 'linux'. Jan 20 00:48:43.721632 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:48:43.737209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1370) Jan 20 00:48:43.734337 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:48:43.959208 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 00:48:43.959836 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 00:48:43.960305 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 00:48:43.969710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:48:43.989217 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 00:48:43.992597 systemd-networkd[1392]: lo: Link UP Jan 20 00:48:43.992628 systemd-networkd[1392]: lo: Gained carrier Jan 20 00:48:43.995203 systemd-networkd[1392]: Enumeration completed Jan 20 00:48:43.999297 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:48:44.003630 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:48:44.008051 systemd[1]: Reached target network.target - Network. Jan 20 00:48:44.008553 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:48:44.008558 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:48:44.011995 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:48:44.012090 systemd-networkd[1392]: eth0: Link UP Jan 20 00:48:44.012154 systemd-networkd[1392]: eth0: Gained carrier Jan 20 00:48:44.012168 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:48:44.012523 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:48:44.028412 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:48:44.030210 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:48:44.031920 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jan 20 00:48:44.034419 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:48:44.036289 systemd-timesyncd[1402]: Initial clock synchronization to Tue 2026-01-20 00:48:43.810213 UTC. Jan 20 00:48:44.050240 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:48:44.063841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:48:44.081349 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 00:48:44.336587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
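networkd brings eth0 up with a DHCPv4 lease of 10.0.0.135/16 from 10.0.0.1, and timesyncd then steps the clock: the synchronization entry is stamped 00:48:44.036289 but sets the wall clock to 00:48:43.810213, so the guest clock was running roughly a quarter of a second ahead of the NTP source. A short sketch of that arithmetic using only the two timestamps from the log:

    from datetime import datetime

    # Journal timestamp of the entry announcing the synchronization...
    logged_at = datetime.fromisoformat("2026-01-20 00:48:44.036289")
    # ...and the wall-clock time timesyncd set, taken from the same entry.
    synced_to = datetime.fromisoformat("2026-01-20 00:48:43.810213")

    # Positive value: the local clock was ahead of the time server.
    step = logged_at - synced_to
    print(f"clock stepped back by ~{step.total_seconds():.3f} s")  # ~0.226 s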
Jan 20 00:48:44.346250 kernel: kvm_amd: TSC scaling supported Jan 20 00:48:44.346356 kernel: kvm_amd: Nested Virtualization enabled Jan 20 00:48:44.346371 kernel: kvm_amd: Nested Paging enabled Jan 20 00:48:44.349507 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 00:48:44.349532 kernel: kvm_amd: PMU virtualization is disabled Jan 20 00:48:44.423175 kernel: EDAC MC: Ver: 3.0.0 Jan 20 00:48:44.457733 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 00:48:44.579240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:48:44.599628 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 00:48:44.613476 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:48:44.653749 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:48:44.659384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:48:44.666468 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:48:44.670255 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:48:44.674687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:48:44.678903 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:48:44.682323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:48:44.687071 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:48:44.691858 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:48:44.691939 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:48:44.695034 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:48:44.699500 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:48:44.706317 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:48:44.718225 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:48:44.724481 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 00:48:44.729172 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:48:44.733237 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:48:44.737200 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:48:44.740482 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:48:44.740557 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:48:44.750340 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:48:44.755865 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:48:44.760954 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:48:44.770972 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:48:44.771392 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 20 00:48:44.775371 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:48:44.780386 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:48:44.785967 jq[1435]: false Jan 20 00:48:44.789202 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:48:44.794077 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:48:44.801261 extend-filesystems[1436]: Found loop3 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found loop4 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found loop5 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found sr0 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda1 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda2 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda3 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found usr Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda4 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda6 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda7 Jan 20 00:48:44.804193 extend-filesystems[1436]: Found vda9 Jan 20 00:48:44.804193 extend-filesystems[1436]: Checking size of /dev/vda9 Jan 20 00:48:44.913768 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1382) Jan 20 00:48:44.913920 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:48:44.810457 dbus-daemon[1434]: [system] SELinux support is enabled Jan 20 00:48:44.820457 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:48:44.947948 extend-filesystems[1436]: Resized partition /dev/vda9 Jan 20 00:48:44.838351 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:48:45.002037 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:48:45.071961 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:48:44.841471 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:48:44.842323 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:48:44.845464 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:48:45.112711 update_engine[1450]: I20260120 00:48:44.918932 1450 main.cc:92] Flatcar Update Engine starting Jan 20 00:48:45.112711 update_engine[1450]: I20260120 00:48:45.036302 1450 update_check_scheduler.cc:74] Next update check in 8m27s Jan 20 00:48:44.855048 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:48:45.119464 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:48:45.119464 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:48:45.119464 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:48:44.866827 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:48:45.145667 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 20 00:48:44.881443 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
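extend-filesystems grows the ROOT partition's ext4 filesystem online: resize2fs reports /dev/vda9 going from 553472 to 1864699 blocks with a 4k block size. A quick check of what those figures mean in bytes; the block counts are copied from the log, and the GiB conversions are the only thing added:

    BLOCK_SIZE = 4096            # "(4k) blocks" per the resize2fs message
    OLD_BLOCKS = 553_472         # size before on-line resizing
    NEW_BLOCKS = 1_864_699       # size after resizing

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")            # ~2.11 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")            # ~7.11 GiB
    print(f"gained: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")  # ~5.00 GiB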
Jan 20 00:48:45.163224 tar[1460]: linux-amd64/LICENSE Jan 20 00:48:45.163224 tar[1460]: linux-amd64/helm Jan 20 00:48:44.909041 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:48:44.914191 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:48:45.167315 jq[1452]: true Jan 20 00:48:44.918755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:48:44.919156 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:48:45.167668 jq[1468]: true Jan 20 00:48:45.039842 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:48:45.040161 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:48:45.065619 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:48:45.082213 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:48:45.095639 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:48:45.095671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:48:45.102449 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:48:45.102470 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:48:45.118405 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:48:45.132007 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:48:45.132291 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:48:45.145656 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:48:45.145686 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:48:45.156378 systemd-logind[1447]: New seat seat0. Jan 20 00:48:45.164718 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:48:45.208357 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:48:45.426257 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:48:45.456344 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:48:45.456554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:48:45.464776 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:48:45.532738 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:48:45.643867 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:48:45.658365 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:48:45.658653 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:48:45.683573 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:48:45.705673 systemd-networkd[1392]: eth0: Gained IPv6LL Jan 20 00:48:45.712592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 20 00:48:45.718064 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:48:45.823400 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:48:45.852418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:48:45.864666 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:48:45.869171 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:48:45.892615 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:48:45.899396 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:48:45.903054 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:48:46.140552 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:48:46.140908 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:48:46.148559 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:48:46.169471 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:48:46.830438 containerd[1464]: time="2026-01-20T00:48:46.829846333Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:48:46.893528 containerd[1464]: time="2026-01-20T00:48:46.893433116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:48:46.897646 containerd[1464]: time="2026-01-20T00:48:46.897597375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:48:46.898662 containerd[1464]: time="2026-01-20T00:48:46.897709562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:48:46.898662 containerd[1464]: time="2026-01-20T00:48:46.897778180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:48:46.898662 containerd[1464]: time="2026-01-20T00:48:46.898246867Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:48:46.898662 containerd[1464]: time="2026-01-20T00:48:46.898330888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:48:46.898662 containerd[1464]: time="2026-01-20T00:48:46.898502838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:48:46.898662 containerd[1464]: time="2026-01-20T00:48:46.898524320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:48:46.899206 containerd[1464]: time="2026-01-20T00:48:46.899177565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:48:46.899387 containerd[1464]: time="2026-01-20T00:48:46.899363129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:48:46.899496 containerd[1464]: time="2026-01-20T00:48:46.899469629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:48:46.899598 containerd[1464]: time="2026-01-20T00:48:46.899567263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:48:46.900266 containerd[1464]: time="2026-01-20T00:48:46.900152193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:48:46.900859 containerd[1464]: time="2026-01-20T00:48:46.900795313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:48:46.901289 containerd[1464]: time="2026-01-20T00:48:46.901235412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:48:46.901289 containerd[1464]: time="2026-01-20T00:48:46.901281717Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:48:46.901551 containerd[1464]: time="2026-01-20T00:48:46.901495586Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:48:46.901689 containerd[1464]: time="2026-01-20T00:48:46.901650043Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:48:46.909494 containerd[1464]: time="2026-01-20T00:48:46.909466028Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:48:46.910233 containerd[1464]: time="2026-01-20T00:48:46.910068354Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:48:46.910625 containerd[1464]: time="2026-01-20T00:48:46.910440765Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:48:46.910898 containerd[1464]: time="2026-01-20T00:48:46.910879320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:48:46.911366 containerd[1464]: time="2026-01-20T00:48:46.911347411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:48:46.912062 containerd[1464]: time="2026-01-20T00:48:46.912036631Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:48:46.912673 containerd[1464]: time="2026-01-20T00:48:46.912652668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:48:46.913227 containerd[1464]: time="2026-01-20T00:48:46.913205463Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:48:46.913314 containerd[1464]: time="2026-01-20T00:48:46.913299912Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:48:46.913411 containerd[1464]: time="2026-01-20T00:48:46.913392572Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 20 00:48:46.913524 containerd[1464]: time="2026-01-20T00:48:46.913474238Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913580 containerd[1464]: time="2026-01-20T00:48:46.913568559Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913650 containerd[1464]: time="2026-01-20T00:48:46.913637129Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913696 containerd[1464]: time="2026-01-20T00:48:46.913685770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913739 containerd[1464]: time="2026-01-20T00:48:46.913728811Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913779 containerd[1464]: time="2026-01-20T00:48:46.913769126Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913832 containerd[1464]: time="2026-01-20T00:48:46.913819019Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.913921 containerd[1464]: time="2026-01-20T00:48:46.913864142Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:48:46.914040 containerd[1464]: time="2026-01-20T00:48:46.914025068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914149 containerd[1464]: time="2026-01-20T00:48:46.914133796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914336 containerd[1464]: time="2026-01-20T00:48:46.914318041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914402 containerd[1464]: time="2026-01-20T00:48:46.914390413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914447 containerd[1464]: time="2026-01-20T00:48:46.914436298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914519 containerd[1464]: time="2026-01-20T00:48:46.914506225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914564 containerd[1464]: time="2026-01-20T00:48:46.914553988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914605 containerd[1464]: time="2026-01-20T00:48:46.914594809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914723 containerd[1464]: time="2026-01-20T00:48:46.914711053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914773 containerd[1464]: time="2026-01-20T00:48:46.914762392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.914904 containerd[1464]: time="2026-01-20T00:48:46.914826309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 20 00:48:46.914970 containerd[1464]: time="2026-01-20T00:48:46.914957515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.915012 containerd[1464]: time="2026-01-20T00:48:46.915002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.915166 containerd[1464]: time="2026-01-20T00:48:46.915150634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:48:46.915284 containerd[1464]: time="2026-01-20T00:48:46.915269038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915321305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915334607Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915488399Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915539474Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915554564Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915570465Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915584548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915770328Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915859743Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:48:46.918155 containerd[1464]: time="2026-01-20T00:48:46.915872938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 00:48:46.918338 containerd[1464]: time="2026-01-20T00:48:46.916862187Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:48:46.918338 containerd[1464]: time="2026-01-20T00:48:46.916960310Z" level=info msg="Connect containerd service" Jan 20 00:48:46.918338 containerd[1464]: time="2026-01-20T00:48:46.917065236Z" level=info msg="using legacy CRI server" Jan 20 00:48:46.918338 containerd[1464]: time="2026-01-20T00:48:46.917147468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:48:46.918338 containerd[1464]: time="2026-01-20T00:48:46.917554074Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:48:46.921806 containerd[1464]: time="2026-01-20T00:48:46.921721694Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:48:46.922207 
containerd[1464]: time="2026-01-20T00:48:46.922032493Z" level=info msg="Start subscribing containerd event" Jan 20 00:48:46.922311 containerd[1464]: time="2026-01-20T00:48:46.922272720Z" level=info msg="Start recovering state" Jan 20 00:48:46.922494 containerd[1464]: time="2026-01-20T00:48:46.922451151Z" level=info msg="Start event monitor" Jan 20 00:48:46.922537 containerd[1464]: time="2026-01-20T00:48:46.922500407Z" level=info msg="Start snapshots syncer" Jan 20 00:48:46.922537 containerd[1464]: time="2026-01-20T00:48:46.922529258Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:48:46.922537 containerd[1464]: time="2026-01-20T00:48:46.922536589Z" level=info msg="Start streaming server" Jan 20 00:48:46.924402 containerd[1464]: time="2026-01-20T00:48:46.924316964Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:48:46.924402 containerd[1464]: time="2026-01-20T00:48:46.924392598Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:48:46.924662 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:48:46.926486 containerd[1464]: time="2026-01-20T00:48:46.926374987Z" level=info msg="containerd successfully booted in 0.099158s" Jan 20 00:48:47.130618 tar[1460]: linux-amd64/README.md Jan 20 00:48:47.153200 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:48:48.774660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:48:48.779076 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:48:48.781251 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:48:48.783158 systemd[1]: Startup finished in 1.190s (kernel) + 11.607s (initrd) + 9.473s (userspace) = 22.270s. Jan 20 00:48:49.265730 kubelet[1547]: E0120 00:48:49.265535 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:48:49.269644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:48:49.269863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:48:49.270372 systemd[1]: kubelet.service: Consumed 3.121s CPU time. Jan 20 00:48:54.546297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:48:54.548508 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:46716.service - OpenSSH per-connection server daemon (10.0.0.1:46716). Jan 20 00:48:54.864272 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 46716 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:54.866764 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:54.879551 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:48:54.889541 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:48:54.892634 systemd-logind[1447]: New session 1 of user core. Jan 20 00:48:54.911274 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:48:54.930586 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 00:48:54.934645 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:48:55.047806 systemd[1564]: Queued start job for default target default.target. Jan 20 00:48:55.060487 systemd[1564]: Created slice app.slice - User Application Slice. Jan 20 00:48:55.060540 systemd[1564]: Reached target paths.target - Paths. Jan 20 00:48:55.060589 systemd[1564]: Reached target timers.target - Timers. Jan 20 00:48:55.062345 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:48:55.075925 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:48:55.076078 systemd[1564]: Reached target sockets.target - Sockets. Jan 20 00:48:55.076179 systemd[1564]: Reached target basic.target - Basic System. Jan 20 00:48:55.076220 systemd[1564]: Reached target default.target - Main User Target. Jan 20 00:48:55.076261 systemd[1564]: Startup finished in 133ms. Jan 20 00:48:55.076383 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:48:55.078398 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:48:55.141541 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:46720.service - OpenSSH per-connection server daemon (10.0.0.1:46720). Jan 20 00:48:55.197985 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 46720 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:55.199886 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:55.206780 systemd-logind[1447]: New session 2 of user core. Jan 20 00:48:55.216334 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:48:55.276359 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 20 00:48:55.288216 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:46720.service: Deactivated successfully. Jan 20 00:48:55.290266 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:48:55.292492 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:48:55.294357 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:46728.service - OpenSSH per-connection server daemon (10.0.0.1:46728). Jan 20 00:48:55.295758 systemd-logind[1447]: Removed session 2. Jan 20 00:48:55.333801 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 46728 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:55.335779 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:55.341218 systemd-logind[1447]: New session 3 of user core. Jan 20 00:48:55.350348 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:48:55.405559 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 20 00:48:55.413078 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:46728.service: Deactivated successfully. Jan 20 00:48:55.415000 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:48:55.416593 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:48:55.427421 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:46736.service - OpenSSH per-connection server daemon (10.0.0.1:46736). Jan 20 00:48:55.429008 systemd-logind[1447]: Removed session 3. 
Jan 20 00:48:55.464611 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 46736 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:55.466668 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:55.472419 systemd-logind[1447]: New session 4 of user core. Jan 20 00:48:55.482268 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:48:55.541902 sshd[1589]: pam_unix(sshd:session): session closed for user core Jan 20 00:48:55.559156 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:46736.service: Deactivated successfully. Jan 20 00:48:55.560722 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:48:55.562465 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:48:55.570560 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:46744.service - OpenSSH per-connection server daemon (10.0.0.1:46744). Jan 20 00:48:55.571745 systemd-logind[1447]: Removed session 4. Jan 20 00:48:55.609007 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 46744 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:55.610835 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:55.616142 systemd-logind[1447]: New session 5 of user core. Jan 20 00:48:55.626454 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:48:55.689519 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:48:55.690066 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:48:55.717713 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 20 00:48:55.720188 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 20 00:48:55.735852 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:46744.service: Deactivated successfully. Jan 20 00:48:55.737458 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:48:55.739028 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:48:55.751391 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:46752.service - OpenSSH per-connection server daemon (10.0.0.1:46752). Jan 20 00:48:55.752391 systemd-logind[1447]: Removed session 5. Jan 20 00:48:55.785606 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 46752 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:55.787240 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:55.792667 systemd-logind[1447]: New session 6 of user core. Jan 20 00:48:55.806352 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:48:55.878577 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:48:55.878980 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:48:55.886538 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 20 00:48:55.894663 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:48:55.895018 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:48:55.923449 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:48:55.929646 auditctl[1611]: No rules Jan 20 00:48:55.931777 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 20 00:48:55.932174 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:48:55.935420 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:48:56.045710 augenrules[1629]: No rules Jan 20 00:48:56.048611 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:48:56.051026 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 20 00:48:56.056904 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 20 00:48:56.077181 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:46752.service: Deactivated successfully. Jan 20 00:48:56.079195 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:48:56.081193 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:48:56.091491 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:46768.service - OpenSSH per-connection server daemon (10.0.0.1:46768). Jan 20 00:48:56.093599 systemd-logind[1447]: Removed session 6. Jan 20 00:48:56.132804 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 46768 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:48:56.135216 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:48:56.141733 systemd-logind[1447]: New session 7 of user core. Jan 20 00:48:56.159384 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:48:56.218278 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:48:56.218733 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:48:57.687594 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:48:57.687745 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:48:59.523648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:48:59.622395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:48:59.663286 dockerd[1660]: time="2026-01-20T00:48:59.662979520Z" level=info msg="Starting up" Jan 20 00:49:00.129570 systemd[1]: var-lib-docker-metacopy\x2dcheck507796857-merged.mount: Deactivated successfully. Jan 20 00:49:00.160709 dockerd[1660]: time="2026-01-20T00:49:00.160425574Z" level=info msg="Loading containers: start." Jan 20 00:49:00.361219 kernel: Initializing XFRM netlink socket Jan 20 00:49:00.611962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:00.612942 systemd-networkd[1392]: docker0: Link UP Jan 20 00:49:00.620569 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:49:00.694534 kubelet[1763]: E0120 00:49:00.694428 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:49:00.701811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:49:00.702006 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 00:49:00.743017 dockerd[1660]: time="2026-01-20T00:49:00.742935787Z" level=info msg="Loading containers: done." Jan 20 00:49:00.761336 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck378853319-merged.mount: Deactivated successfully. Jan 20 00:49:00.773311 dockerd[1660]: time="2026-01-20T00:49:00.773190953Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:49:00.773447 dockerd[1660]: time="2026-01-20T00:49:00.773406329Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:49:00.773714 dockerd[1660]: time="2026-01-20T00:49:00.773656678Z" level=info msg="Daemon has completed initialization" Jan 20 00:49:00.869089 dockerd[1660]: time="2026-01-20T00:49:00.868782114Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:49:00.869679 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:49:01.720793 containerd[1464]: time="2026-01-20T00:49:01.720657257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 00:49:02.227422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226432712.mount: Deactivated successfully. Jan 20 00:49:04.527147 containerd[1464]: time="2026-01-20T00:49:04.526830474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:04.528349 containerd[1464]: time="2026-01-20T00:49:04.527509633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 20 00:49:04.528956 containerd[1464]: time="2026-01-20T00:49:04.528915608Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:04.532621 containerd[1464]: time="2026-01-20T00:49:04.532527692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:04.533660 containerd[1464]: time="2026-01-20T00:49:04.533601927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.81282805s" Jan 20 00:49:04.533660 containerd[1464]: time="2026-01-20T00:49:04.533653160Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 00:49:04.536532 containerd[1464]: time="2026-01-20T00:49:04.536504166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 00:49:08.364577 containerd[1464]: time="2026-01-20T00:49:08.364224387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:08.365858 containerd[1464]: time="2026-01-20T00:49:08.365060281Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 20 00:49:08.366897 containerd[1464]: time="2026-01-20T00:49:08.366811051Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:08.373276 containerd[1464]: time="2026-01-20T00:49:08.373201156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:08.375222 containerd[1464]: time="2026-01-20T00:49:08.375167622Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.8386292s" Jan 20 00:49:08.375222 containerd[1464]: time="2026-01-20T00:49:08.375214728Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 00:49:08.377378 containerd[1464]: time="2026-01-20T00:49:08.377066188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 00:49:10.203074 containerd[1464]: time="2026-01-20T00:49:10.202762437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:10.204422 containerd[1464]: time="2026-01-20T00:49:10.203575114Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 20 00:49:10.204860 containerd[1464]: time="2026-01-20T00:49:10.204799517Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:10.207893 containerd[1464]: time="2026-01-20T00:49:10.207829016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:10.211384 containerd[1464]: time="2026-01-20T00:49:10.211345901Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.834244921s" Jan 20 00:49:10.211448 containerd[1464]: time="2026-01-20T00:49:10.211384675Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 00:49:10.214463 containerd[1464]: time="2026-01-20T00:49:10.214410289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 00:49:10.755509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:49:10.766220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 00:49:11.320866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:11.336781 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:49:11.426244 kubelet[1901]: E0120 00:49:11.426176 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:49:11.429891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:49:11.430270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:49:11.755563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076760935.mount: Deactivated successfully. Jan 20 00:49:12.214748 containerd[1464]: time="2026-01-20T00:49:12.214544126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:12.216036 containerd[1464]: time="2026-01-20T00:49:12.215954430Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 00:49:12.217204 containerd[1464]: time="2026-01-20T00:49:12.217169135Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:12.219706 containerd[1464]: time="2026-01-20T00:49:12.219648599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:12.220590 containerd[1464]: time="2026-01-20T00:49:12.220507919Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.006045912s" Jan 20 00:49:12.220590 containerd[1464]: time="2026-01-20T00:49:12.220555121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 00:49:12.221535 containerd[1464]: time="2026-01-20T00:49:12.221470174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 00:49:12.645905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056073281.mount: Deactivated successfully. 
Jan 20 00:49:13.634505 containerd[1464]: time="2026-01-20T00:49:13.634372586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:13.635087 containerd[1464]: time="2026-01-20T00:49:13.634994508Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 20 00:49:13.636591 containerd[1464]: time="2026-01-20T00:49:13.636516328Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:13.641264 containerd[1464]: time="2026-01-20T00:49:13.641194700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:13.643352 containerd[1464]: time="2026-01-20T00:49:13.643266982Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.421773962s" Jan 20 00:49:13.643352 containerd[1464]: time="2026-01-20T00:49:13.643307510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 00:49:13.643893 containerd[1464]: time="2026-01-20T00:49:13.643832101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:49:14.060770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579252930.mount: Deactivated successfully. 
Jan 20 00:49:14.069209 containerd[1464]: time="2026-01-20T00:49:14.069032249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:14.070220 containerd[1464]: time="2026-01-20T00:49:14.070137952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:49:14.071647 containerd[1464]: time="2026-01-20T00:49:14.071576321Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:14.074639 containerd[1464]: time="2026-01-20T00:49:14.074598966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:14.075807 containerd[1464]: time="2026-01-20T00:49:14.075618787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 431.748889ms" Jan 20 00:49:14.075807 containerd[1464]: time="2026-01-20T00:49:14.075667088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:49:14.076617 containerd[1464]: time="2026-01-20T00:49:14.076301724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 00:49:14.525892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766048292.mount: Deactivated successfully. Jan 20 00:49:17.066274 containerd[1464]: time="2026-01-20T00:49:17.065859134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:17.067691 containerd[1464]: time="2026-01-20T00:49:17.066863597Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 20 00:49:17.068442 containerd[1464]: time="2026-01-20T00:49:17.068394447Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:17.072641 containerd[1464]: time="2026-01-20T00:49:17.072582374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:17.075014 containerd[1464]: time="2026-01-20T00:49:17.074909537Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.998562567s" Jan 20 00:49:17.075075 containerd[1464]: time="2026-01-20T00:49:17.075026950Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 00:49:21.222388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:49:21.232537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:21.267003 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit session-7.scope)... Jan 20 00:49:21.267039 systemd[1]: Reloading... Jan 20 00:49:21.387315 zram_generator::config[2092]: No configuration found. Jan 20 00:49:21.526045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:49:21.612660 systemd[1]: Reloading finished in 345 ms. Jan 20 00:49:21.670801 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:49:21.670928 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:49:21.671363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:21.673412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:21.891342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:21.891713 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:49:22.274647 kubelet[2143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:49:22.274647 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:49:22.274647 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:49:22.275989 kubelet[2143]: I0120 00:49:22.274979 2143 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:49:23.017298 kubelet[2143]: I0120 00:49:23.016998 2143 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:49:23.017298 kubelet[2143]: I0120 00:49:23.017082 2143 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:49:23.018270 kubelet[2143]: I0120 00:49:23.017866 2143 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:49:23.158517 kubelet[2143]: E0120 00:49:23.158067 2143 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:49:23.159937 kubelet[2143]: I0120 00:49:23.158839 2143 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:49:23.171008 kubelet[2143]: E0120 00:49:23.170918 2143 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:49:23.171008 kubelet[2143]: I0120 00:49:23.170961 2143 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:49:23.181724 kubelet[2143]: I0120 00:49:23.181673 2143 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:49:23.182191 kubelet[2143]: I0120 00:49:23.182027 2143 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:49:23.182418 kubelet[2143]: I0120 00:49:23.182088 2143 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:49:23.182418 kubelet[2143]: I0120 00:49:23.182363 2143 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:49:23.182418 kubelet[2143]: I0120 00:49:23.182373 2143 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:49:23.183511 kubelet[2143]: I0120 00:49:23.183413 2143 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:49:23.188271 kubelet[2143]: I0120 00:49:23.188189 2143 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:49:23.188271 kubelet[2143]: I0120 00:49:23.188232 2143 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:49:23.188361 kubelet[2143]: I0120 00:49:23.188305 2143 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:49:23.188361 kubelet[2143]: I0120 00:49:23.188335 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:49:23.206670 kubelet[2143]: I0120 00:49:23.206559 2143 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:49:23.207883 kubelet[2143]: E0120 00:49:23.206853 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:49:23.207883 kubelet[2143]: E0120 00:49:23.206988 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:49:23.207883 kubelet[2143]: I0120 00:49:23.207387 2143 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:49:23.214592 kubelet[2143]: W0120 00:49:23.213030 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:49:23.226206 kubelet[2143]: I0120 00:49:23.225854 2143 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:49:23.226206 kubelet[2143]: I0120 00:49:23.226282 2143 server.go:1289] "Started kubelet" Jan 20 00:49:23.231675 kubelet[2143]: I0120 00:49:23.226581 2143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:49:23.331989 kubelet[2143]: I0120 00:49:23.268354 2143 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:49:23.344893 kubelet[2143]: I0120 00:49:23.344179 2143 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:49:23.344893 kubelet[2143]: I0120 00:49:23.344557 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:49:23.345378 kubelet[2143]: I0120 00:49:23.345362 2143 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:49:23.347219 kubelet[2143]: I0120 00:49:23.346921 2143 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:49:23.347482 kubelet[2143]: I0120 00:49:23.347467 2143 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:49:23.348055 kubelet[2143]: E0120 00:49:23.348017 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:49:23.350451 kubelet[2143]: E0120 00:49:23.346876 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4a0aa4e08297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:49:23.226067607 +0000 UTC m=+1.174438306,LastTimestamp:2026-01-20 00:49:23.226067607 +0000 UTC m=+1.174438306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:49:23.350669 kubelet[2143]: I0120 00:49:23.350629 2143 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:49:23.350923 kubelet[2143]: I0120 00:49:23.350808 2143 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:49:23.351408 kubelet[2143]: I0120 00:49:23.351294 2143 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 
00:49:23.351408 kubelet[2143]: I0120 00:49:23.351386 2143 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:49:23.351679 kubelet[2143]: E0120 00:49:23.351590 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms" Jan 20 00:49:23.351803 kubelet[2143]: E0120 00:49:23.351742 2143 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:49:23.352406 kubelet[2143]: E0120 00:49:23.352310 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:49:23.353406 kubelet[2143]: I0120 00:49:23.353307 2143 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:49:23.375044 kubelet[2143]: I0120 00:49:23.374906 2143 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:49:23.454844 kubelet[2143]: E0120 00:49:23.454678 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:49:23.468731 kubelet[2143]: I0120 00:49:23.468645 2143 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:49:23.468731 kubelet[2143]: I0120 00:49:23.468686 2143 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:49:23.468731 kubelet[2143]: I0120 00:49:23.468731 2143 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:49:23.473290 kubelet[2143]: I0120 00:49:23.473239 2143 policy_none.go:49] "None policy: Start" Jan 20 00:49:23.473290 kubelet[2143]: I0120 00:49:23.473282 2143 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:49:23.473372 kubelet[2143]: I0120 00:49:23.473297 2143 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:49:23.474738 kubelet[2143]: I0120 00:49:23.474692 2143 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:49:23.474795 kubelet[2143]: I0120 00:49:23.474754 2143 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:49:23.474854 kubelet[2143]: I0120 00:49:23.474814 2143 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:49:23.474895 kubelet[2143]: I0120 00:49:23.474877 2143 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:49:23.475039 kubelet[2143]: E0120 00:49:23.474955 2143 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:49:23.479733 kubelet[2143]: E0120 00:49:23.479673 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:49:23.485603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 00:49:23.505392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 00:49:23.520466 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 00:49:23.535387 kubelet[2143]: E0120 00:49:23.535318 2143 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:49:23.535676 kubelet[2143]: I0120 00:49:23.535634 2143 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:49:23.535722 kubelet[2143]: I0120 00:49:23.535674 2143 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:49:23.541562 kubelet[2143]: I0120 00:49:23.536228 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:49:23.541562 kubelet[2143]: E0120 00:49:23.541385 2143 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:49:23.541562 kubelet[2143]: E0120 00:49:23.541461 2143 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:49:23.555425 kubelet[2143]: E0120 00:49:23.554823 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" Jan 20 00:49:23.654725 kubelet[2143]: I0120 00:49:23.654232 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:49:23.655210 kubelet[2143]: I0120 00:49:23.654758 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:23.655210 kubelet[2143]: I0120 00:49:23.654793 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:23.655210 kubelet[2143]: E0120 00:49:23.654798 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jan 20 00:49:23.655210 kubelet[2143]: I0120 00:49:23.654814 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e3e827b001171d71753a1f711ebe65f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3e827b001171d71753a1f711ebe65f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:23.655210 kubelet[2143]: I0120 00:49:23.654834 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e3e827b001171d71753a1f711ebe65f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"5e3e827b001171d71753a1f711ebe65f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:23.655348 kubelet[2143]: I0120 00:49:23.654903 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:23.655348 kubelet[2143]: I0120 00:49:23.654921 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:23.655348 kubelet[2143]: I0120 00:49:23.654939 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:23.655348 kubelet[2143]: I0120 00:49:23.654994 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e3e827b001171d71753a1f711ebe65f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3e827b001171d71753a1f711ebe65f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:23.655348 kubelet[2143]: I0120 00:49:23.655035 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:23.670249 systemd[1]: Created slice kubepods-burstable-pod5e3e827b001171d71753a1f711ebe65f.slice - libcontainer container kubepods-burstable-pod5e3e827b001171d71753a1f711ebe65f.slice. Jan 20 00:49:23.700071 kubelet[2143]: E0120 00:49:23.699693 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:23.711699 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 20 00:49:23.717339 kubelet[2143]: E0120 00:49:23.717091 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:23.720925 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 20 00:49:23.730298 kubelet[2143]: E0120 00:49:23.730237 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:23.789334 kubelet[2143]: E0120 00:49:23.788862 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4a0aa4e08297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:49:23.226067607 +0000 UTC m=+1.174438306,LastTimestamp:2026-01-20 00:49:23.226067607 +0000 UTC m=+1.174438306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:49:23.860201 kubelet[2143]: I0120 00:49:23.859973 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:49:23.860625 kubelet[2143]: E0120 00:49:23.860523 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jan 20 00:49:23.956040 kubelet[2143]: E0120 00:49:23.955773 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" Jan 20 00:49:24.001682 kubelet[2143]: E0120 00:49:24.001604 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:24.003191 containerd[1464]: time="2026-01-20T00:49:24.002929439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e3e827b001171d71753a1f711ebe65f,Namespace:kube-system,Attempt:0,}" Jan 20 00:49:24.018225 kubelet[2143]: E0120 00:49:24.018068 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:24.018982 containerd[1464]: time="2026-01-20T00:49:24.018861074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 00:49:24.032710 kubelet[2143]: E0120 00:49:24.032558 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:24.033350 containerd[1464]: time="2026-01-20T00:49:24.033285079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 00:49:24.036386 kubelet[2143]: E0120 00:49:24.036303 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:49:24.236289 kubelet[2143]: E0120 00:49:24.235997 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:49:24.263530 kubelet[2143]: I0120 00:49:24.263499 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:49:24.264074 kubelet[2143]: E0120 00:49:24.263999 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jan 20 00:49:24.305314 kubelet[2143]: E0120 00:49:24.305078 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:49:24.687304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416708504.mount: Deactivated successfully. Jan 20 00:49:24.696756 containerd[1464]: time="2026-01-20T00:49:24.696622911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:49:24.699377 containerd[1464]: time="2026-01-20T00:49:24.699305462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:49:24.701318 containerd[1464]: time="2026-01-20T00:49:24.701211793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:49:24.702535 containerd[1464]: time="2026-01-20T00:49:24.702455796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:49:24.703730 containerd[1464]: time="2026-01-20T00:49:24.703669241Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:49:24.704985 containerd[1464]: time="2026-01-20T00:49:24.704945123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:49:24.709857 containerd[1464]: time="2026-01-20T00:49:24.709698553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:49:24.713466 containerd[1464]: time="2026-01-20T00:49:24.713362428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:49:24.714437 containerd[1464]: time="2026-01-20T00:49:24.714365511Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 681.001321ms" Jan 20 00:49:24.716465 containerd[1464]: time="2026-01-20T00:49:24.716419176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 697.452332ms" Jan 20 00:49:24.719732 containerd[1464]: time="2026-01-20T00:49:24.719686218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.420337ms" Jan 20 00:49:24.745249 kubelet[2143]: E0120 00:49:24.745064 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:49:24.757463 kubelet[2143]: E0120 00:49:24.756988 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" Jan 20 00:49:25.212204 kubelet[2143]: E0120 00:49:25.211499 2143 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:49:25.218066 kubelet[2143]: I0120 00:49:25.215082 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:49:25.218066 kubelet[2143]: E0120 00:49:25.215558 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jan 20 00:49:25.515145 containerd[1464]: time="2026-01-20T00:49:25.514658188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:25.515145 containerd[1464]: time="2026-01-20T00:49:25.514919109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:25.515145 containerd[1464]: time="2026-01-20T00:49:25.514941292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:25.516039 containerd[1464]: time="2026-01-20T00:49:25.515627683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:25.516039 containerd[1464]: time="2026-01-20T00:49:25.515623044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:25.516039 containerd[1464]: time="2026-01-20T00:49:25.515702664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:25.516039 containerd[1464]: time="2026-01-20T00:49:25.515725247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:25.516273 containerd[1464]: time="2026-01-20T00:49:25.515850702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:25.560362 containerd[1464]: time="2026-01-20T00:49:25.557468658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:25.560362 containerd[1464]: time="2026-01-20T00:49:25.557879852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:25.560362 containerd[1464]: time="2026-01-20T00:49:25.557990770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:25.560362 containerd[1464]: time="2026-01-20T00:49:25.558428750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:25.604429 systemd[1]: Started cri-containerd-6e2d10f5b1f3dd4a081a014184612c172cd3ca1fa369e60fce08cac5b8d735db.scope - libcontainer container 6e2d10f5b1f3dd4a081a014184612c172cd3ca1fa369e60fce08cac5b8d735db. Jan 20 00:49:25.798376 systemd[1]: Started cri-containerd-fc6499a5c7e456507e17ba8509150ec6c84f3f4527ad6766d4c573b647f0874b.scope - libcontainer container fc6499a5c7e456507e17ba8509150ec6c84f3f4527ad6766d4c573b647f0874b. Jan 20 00:49:25.811594 systemd[1]: Started cri-containerd-2ccdc357018f2a5024cd5330ca1cbe900c7ab1ef84bc5319c4b611b200b78b16.scope - libcontainer container 2ccdc357018f2a5024cd5330ca1cbe900c7ab1ef84bc5319c4b611b200b78b16. 
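Each of the three RunPodSandbox requests issued earlier materialises here as a pause container (registry.k8s.io/pause:3.8, pulled just above) whose cgroup is a transient systemd scope named cri-containerd-<sandbox-id>.scope; the "returns sandbox id" lines that follow confirm which scope belongs to the scheduler, controller-manager and apiserver. A sketch for correlating sandbox IDs with those units from the host, assuming crictl is installed and pointed at containerd's CRI socket (this log does not show that setup):

    # Sandboxes known to containerd's CRI plugin; the POD ID column matches the scope names.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods

    # The same IDs appear as transient scope units parented under kubepods.slice.
    systemctl list-units --type=scope 'cri-containerd-*'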
Jan 20 00:49:25.899671 containerd[1464]: time="2026-01-20T00:49:25.899574666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc6499a5c7e456507e17ba8509150ec6c84f3f4527ad6766d4c573b647f0874b\"" Jan 20 00:49:25.904270 kubelet[2143]: E0120 00:49:25.903601 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:25.913730 containerd[1464]: time="2026-01-20T00:49:25.913660915Z" level=info msg="CreateContainer within sandbox \"fc6499a5c7e456507e17ba8509150ec6c84f3f4527ad6766d4c573b647f0874b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:49:25.918306 containerd[1464]: time="2026-01-20T00:49:25.918259210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e3e827b001171d71753a1f711ebe65f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ccdc357018f2a5024cd5330ca1cbe900c7ab1ef84bc5319c4b611b200b78b16\"" Jan 20 00:49:25.920842 kubelet[2143]: E0120 00:49:25.920640 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:25.921522 kubelet[2143]: E0120 00:49:25.921360 2143 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:49:25.923830 containerd[1464]: time="2026-01-20T00:49:25.923799278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2d10f5b1f3dd4a081a014184612c172cd3ca1fa369e60fce08cac5b8d735db\"" Jan 20 00:49:25.925487 kubelet[2143]: E0120 00:49:25.925431 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:25.928537 containerd[1464]: time="2026-01-20T00:49:25.928356236Z" level=info msg="CreateContainer within sandbox \"2ccdc357018f2a5024cd5330ca1cbe900c7ab1ef84bc5319c4b611b200b78b16\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:49:25.931623 containerd[1464]: time="2026-01-20T00:49:25.931559434Z" level=info msg="CreateContainer within sandbox \"6e2d10f5b1f3dd4a081a014184612c172cd3ca1fa369e60fce08cac5b8d735db\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:49:25.945667 containerd[1464]: time="2026-01-20T00:49:25.945581491Z" level=info msg="CreateContainer within sandbox \"fc6499a5c7e456507e17ba8509150ec6c84f3f4527ad6766d4c573b647f0874b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b17609f4ec981313f4e389fc84b23041ae4e01e5c52d17f4932e06ffaf4bb32\"" Jan 20 00:49:25.946726 containerd[1464]: time="2026-01-20T00:49:25.946582565Z" level=info msg="StartContainer for \"9b17609f4ec981313f4e389fc84b23041ae4e01e5c52d17f4932e06ffaf4bb32\"" Jan 20 00:49:25.964437 containerd[1464]: time="2026-01-20T00:49:25.963885511Z" level=info msg="CreateContainer within sandbox 
\"6e2d10f5b1f3dd4a081a014184612c172cd3ca1fa369e60fce08cac5b8d735db\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eec9b0976434d7b3e6af63605ab76de326a8cb93bec4af4c8720f11e655f6e6a\"" Jan 20 00:49:25.964701 containerd[1464]: time="2026-01-20T00:49:25.964646116Z" level=info msg="CreateContainer within sandbox \"2ccdc357018f2a5024cd5330ca1cbe900c7ab1ef84bc5319c4b611b200b78b16\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e62ccc621a207f3958697f29fcf7637ecbdb4c3d7c4d7dcab69c7175239c4ca5\"" Jan 20 00:49:25.964799 containerd[1464]: time="2026-01-20T00:49:25.964755078Z" level=info msg="StartContainer for \"eec9b0976434d7b3e6af63605ab76de326a8cb93bec4af4c8720f11e655f6e6a\"" Jan 20 00:49:25.965903 containerd[1464]: time="2026-01-20T00:49:25.965758430Z" level=info msg="StartContainer for \"e62ccc621a207f3958697f29fcf7637ecbdb4c3d7c4d7dcab69c7175239c4ca5\"" Jan 20 00:49:25.993631 systemd[1]: Started cri-containerd-9b17609f4ec981313f4e389fc84b23041ae4e01e5c52d17f4932e06ffaf4bb32.scope - libcontainer container 9b17609f4ec981313f4e389fc84b23041ae4e01e5c52d17f4932e06ffaf4bb32. Jan 20 00:49:26.017663 systemd[1]: Started cri-containerd-e62ccc621a207f3958697f29fcf7637ecbdb4c3d7c4d7dcab69c7175239c4ca5.scope - libcontainer container e62ccc621a207f3958697f29fcf7637ecbdb4c3d7c4d7dcab69c7175239c4ca5. Jan 20 00:49:26.023776 systemd[1]: Started cri-containerd-eec9b0976434d7b3e6af63605ab76de326a8cb93bec4af4c8720f11e655f6e6a.scope - libcontainer container eec9b0976434d7b3e6af63605ab76de326a8cb93bec4af4c8720f11e655f6e6a. Jan 20 00:49:26.092614 containerd[1464]: time="2026-01-20T00:49:26.092387182Z" level=info msg="StartContainer for \"9b17609f4ec981313f4e389fc84b23041ae4e01e5c52d17f4932e06ffaf4bb32\" returns successfully" Jan 20 00:49:26.092614 containerd[1464]: time="2026-01-20T00:49:26.092495485Z" level=info msg="StartContainer for \"e62ccc621a207f3958697f29fcf7637ecbdb4c3d7c4d7dcab69c7175239c4ca5\" returns successfully" Jan 20 00:49:26.123544 containerd[1464]: time="2026-01-20T00:49:26.123352040Z" level=info msg="StartContainer for \"eec9b0976434d7b3e6af63605ab76de326a8cb93bec4af4c8720f11e655f6e6a\" returns successfully" Jan 20 00:49:26.379296 kubelet[2143]: E0120 00:49:26.375751 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="3.2s" Jan 20 00:49:26.798079 kubelet[2143]: E0120 00:49:26.798012 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:26.799155 kubelet[2143]: E0120 00:49:26.798747 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:26.800735 kubelet[2143]: E0120 00:49:26.800684 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:26.800862 kubelet[2143]: E0120 00:49:26.800817 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:26.803549 kubelet[2143]: E0120 00:49:26.803496 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:26.804124 kubelet[2143]: E0120 00:49:26.803710 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:26.817791 kubelet[2143]: I0120 00:49:26.817731 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:49:27.815884 kubelet[2143]: E0120 00:49:27.815604 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:27.818499 kubelet[2143]: E0120 00:49:27.816464 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:27.818499 kubelet[2143]: E0120 00:49:27.817482 2143 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:49:27.818499 kubelet[2143]: E0120 00:49:27.817567 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:30.574174 update_engine[1450]: I20260120 00:49:30.573727 1450 update_attempter.cc:509] Updating boot flags... Jan 20 00:49:30.660168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2438) Jan 20 00:49:30.779160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2439) Jan 20 00:49:30.841019 kubelet[2143]: E0120 00:49:30.840776 2143 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:49:30.904343 kubelet[2143]: I0120 00:49:30.904306 2143 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:49:30.951554 kubelet[2143]: I0120 00:49:30.951491 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:30.972487 kubelet[2143]: E0120 00:49:30.971383 2143 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:30.972487 kubelet[2143]: I0120 00:49:30.971426 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:30.975625 kubelet[2143]: E0120 00:49:30.975593 2143 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:30.975884 kubelet[2143]: I0120 00:49:30.975753 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:30.980204 kubelet[2143]: E0120 00:49:30.979775 2143 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:31.583005 kubelet[2143]: I0120 00:49:31.582881 2143 apiserver.go:52] "Watching apiserver" Jan 20 00:49:31.583435 kubelet[2143]: I0120 
00:49:31.583398 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:31.593670 kubelet[2143]: E0120 00:49:31.593608 2143 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:31.593878 kubelet[2143]: E0120 00:49:31.593827 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:31.652074 kubelet[2143]: I0120 00:49:31.651998 2143 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:49:33.109471 kubelet[2143]: I0120 00:49:33.109326 2143 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:33.118932 kubelet[2143]: E0120 00:49:33.118807 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:33.408723 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-7.scope)... Jan 20 00:49:33.408757 systemd[1]: Reloading... Jan 20 00:49:33.512221 zram_generator::config[2490]: No configuration found. Jan 20 00:49:33.658200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:49:33.761311 systemd[1]: Reloading finished in 352 ms. Jan 20 00:49:33.826759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:33.842226 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:49:33.842844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:33.842952 systemd[1]: kubelet.service: Consumed 4.180s CPU time, 134.5M memory peak, 0B memory swap peak. Jan 20 00:49:33.854661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:34.111959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:34.118813 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:49:34.188526 kubelet[2533]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:49:34.188526 kubelet[2533]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:49:34.188526 kubelet[2533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
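The kubelet is restarted here (PID 2143 gives way to 2533) after the systemd reload, and the new instance immediately repeats the deprecation warnings for --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir. The first and last have direct KubeletConfiguration equivalents and can be moved into the file passed via --config; --pod-infra-container-image is going away entirely in 1.35 because the sandbox image is owned by the CRI side, as the warning itself says. A hedged sketch of that migration, with the config file path and containerd socket path assumed rather than taken from this log (the flexvolume directory is the one logged earlier):

    # See which unit drop-ins or environment files currently supply the flags.
    systemctl cat kubelet.service

    # KubeletConfiguration (kubelet.config.k8s.io/v1beta1) fields replacing two of the flags.
    cat >> /etc/kubernetes/kubelet-config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF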
Jan 20 00:49:34.188888 kubelet[2533]: I0120 00:49:34.188535 2533 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:49:34.197230 kubelet[2533]: I0120 00:49:34.197091 2533 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:49:34.197230 kubelet[2533]: I0120 00:49:34.197182 2533 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:49:34.197388 kubelet[2533]: I0120 00:49:34.197363 2533 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:49:34.198576 kubelet[2533]: I0120 00:49:34.198509 2533 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 00:49:34.200568 kubelet[2533]: I0120 00:49:34.200454 2533 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:49:34.210621 kubelet[2533]: E0120 00:49:34.210521 2533 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:49:34.210728 kubelet[2533]: I0120 00:49:34.210628 2533 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:49:34.219147 kubelet[2533]: I0120 00:49:34.219033 2533 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 00:49:34.219435 kubelet[2533]: I0120 00:49:34.219382 2533 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:49:34.219588 kubelet[2533]: I0120 00:49:34.219417 2533 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:49:34.219683 kubelet[2533]: I0120 00:49:34.219613 2533 topology_manager.go:138] "Creating topology 
manager with none policy" Jan 20 00:49:34.219683 kubelet[2533]: I0120 00:49:34.219624 2533 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:49:34.219683 kubelet[2533]: I0120 00:49:34.219663 2533 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:49:34.219893 kubelet[2533]: I0120 00:49:34.219867 2533 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:49:34.219893 kubelet[2533]: I0120 00:49:34.219892 2533 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:49:34.219935 kubelet[2533]: I0120 00:49:34.219910 2533 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:49:34.219935 kubelet[2533]: I0120 00:49:34.219923 2533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:49:34.221565 kubelet[2533]: I0120 00:49:34.221492 2533 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:49:34.223465 kubelet[2533]: I0120 00:49:34.223403 2533 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:49:34.228569 kubelet[2533]: I0120 00:49:34.228547 2533 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:49:34.228635 kubelet[2533]: I0120 00:49:34.228605 2533 server.go:1289] "Started kubelet" Jan 20 00:49:34.228928 kubelet[2533]: I0120 00:49:34.228821 2533 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:49:34.229407 kubelet[2533]: I0120 00:49:34.229147 2533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:49:34.229536 kubelet[2533]: I0120 00:49:34.229471 2533 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:49:34.231801 kubelet[2533]: I0120 00:49:34.231654 2533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:49:34.233584 kubelet[2533]: I0120 00:49:34.233490 2533 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:49:34.234361 kubelet[2533]: I0120 00:49:34.234345 2533 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:49:34.237198 kubelet[2533]: I0120 00:49:34.236955 2533 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:49:34.240521 kubelet[2533]: I0120 00:49:34.237816 2533 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:49:34.240723 kubelet[2533]: I0120 00:49:34.240672 2533 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:49:34.242661 kubelet[2533]: I0120 00:49:34.242550 2533 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:49:34.242762 kubelet[2533]: I0120 00:49:34.242693 2533 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:49:34.243667 kubelet[2533]: E0120 00:49:34.243635 2533 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:49:34.247380 kubelet[2533]: I0120 00:49:34.247212 2533 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:49:34.264457 kubelet[2533]: I0120 00:49:34.264403 2533 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:49:34.266663 kubelet[2533]: I0120 00:49:34.266615 2533 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:49:34.266663 kubelet[2533]: I0120 00:49:34.266653 2533 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:49:34.266846 kubelet[2533]: I0120 00:49:34.266670 2533 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:49:34.266846 kubelet[2533]: I0120 00:49:34.266677 2533 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:49:34.266846 kubelet[2533]: E0120 00:49:34.266719 2533 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286205 2533 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286218 2533 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286234 2533 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286400 2533 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286411 2533 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286425 2533 policy_none.go:49] "None policy: Start" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286434 2533 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286444 2533 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:49:34.287169 kubelet[2533]: I0120 00:49:34.286546 2533 state_mem.go:75] "Updated machine memory state" Jan 20 00:49:34.292142 kubelet[2533]: E0120 00:49:34.292054 2533 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:49:34.292318 kubelet[2533]: I0120 00:49:34.292270 2533 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:49:34.292352 kubelet[2533]: I0120 00:49:34.292323 2533 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:49:34.292996 kubelet[2533]: I0120 00:49:34.292905 2533 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:49:34.294905 kubelet[2533]: E0120 00:49:34.294520 2533 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:49:34.369694 kubelet[2533]: I0120 00:49:34.368173 2533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:34.369694 kubelet[2533]: I0120 00:49:34.368340 2533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:34.369694 kubelet[2533]: I0120 00:49:34.368251 2533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:34.379670 kubelet[2533]: E0120 00:49:34.379501 2533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:34.401502 kubelet[2533]: I0120 00:49:34.400964 2533 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:49:34.406586 sudo[2573]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 00:49:34.407455 sudo[2573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 00:49:34.411443 kubelet[2533]: I0120 00:49:34.411358 2533 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:49:34.411563 kubelet[2533]: I0120 00:49:34.411449 2533 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:49:34.442183 kubelet[2533]: I0120 00:49:34.441975 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:49:34.442183 kubelet[2533]: I0120 00:49:34.442009 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e3e827b001171d71753a1f711ebe65f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3e827b001171d71753a1f711ebe65f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:34.442183 kubelet[2533]: I0120 00:49:34.442029 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:34.442183 kubelet[2533]: I0120 00:49:34.442042 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:34.442183 kubelet[2533]: I0120 00:49:34.442162 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:34.442465 kubelet[2533]: I0120 00:49:34.442190 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e3e827b001171d71753a1f711ebe65f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3e827b001171d71753a1f711ebe65f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:34.442465 kubelet[2533]: I0120 00:49:34.442206 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e3e827b001171d71753a1f711ebe65f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e3e827b001171d71753a1f711ebe65f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:34.442465 kubelet[2533]: I0120 00:49:34.442225 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:34.442465 kubelet[2533]: I0120 00:49:34.442237 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:49:34.683648 kubelet[2533]: E0120 00:49:34.681953 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:34.683648 kubelet[2533]: E0120 00:49:34.682175 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:34.685339 kubelet[2533]: E0120 00:49:34.681947 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:35.042621 sudo[2573]: pam_unix(sudo:session): session closed for user root Jan 20 00:49:35.221292 kubelet[2533]: I0120 00:49:35.221220 2533 apiserver.go:52] "Watching apiserver" Jan 20 00:49:35.239200 kubelet[2533]: I0120 00:49:35.238944 2533 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:49:35.277586 kubelet[2533]: E0120 00:49:35.277012 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:35.277823 kubelet[2533]: I0120 00:49:35.277790 2533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:35.280402 kubelet[2533]: E0120 00:49:35.280282 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:35.284575 kubelet[2533]: E0120 00:49:35.284542 2533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:49:35.284712 kubelet[2533]: E0120 00:49:35.284654 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:35.300424 kubelet[2533]: I0120 00:49:35.300039 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.300027886 podStartE2EDuration="1.300027886s" podCreationTimestamp="2026-01-20 00:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:49:35.290558007 +0000 UTC m=+1.160345840" watchObservedRunningTime="2026-01-20 00:49:35.300027886 +0000 UTC m=+1.169815719" Jan 20 00:49:35.308867 kubelet[2533]: I0120 00:49:35.308659 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.308647884 podStartE2EDuration="1.308647884s" podCreationTimestamp="2026-01-20 00:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:49:35.300368372 +0000 UTC m=+1.170156215" watchObservedRunningTime="2026-01-20 00:49:35.308647884 +0000 UTC m=+1.178435717" Jan 20 00:49:36.278655 kubelet[2533]: E0120 00:49:36.278462 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:36.278655 kubelet[2533]: E0120 00:49:36.278559 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:36.299952 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 20 00:49:36.302389 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:36.305631 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:46768.service: Deactivated successfully. Jan 20 00:49:36.307566 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:49:36.307758 systemd[1]: session-7.scope: Consumed 8.499s CPU time, 163.3M memory peak, 0B memory swap peak. Jan 20 00:49:36.309340 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:49:36.311965 systemd-logind[1447]: Removed session 7. 
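The recurring "Nameserver limits exceeded" errors from dns.go mean the resolv.conf the kubelet propagates into pods lists more nameservers than the three the glibc resolver honours, so it keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A sketch of where to look on the host, assuming systemd-resolved manages DNS there (common on Flatcar, but not stated in this log):

    # The kubelet reads whatever its resolvConf / --resolv-conf option points at;
    # by default that is the host's /etc/resolv.conf.
    grep '^nameserver' /etc/resolv.conf

    # With systemd-resolved, trimming the per-link DNS list to three servers or fewer
    # makes the warning go away.
    resolvectl status | grep -A4 'DNS Servers'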
Jan 20 00:49:37.561420 kubelet[2533]: E0120 00:49:37.561090 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:37.708275 kubelet[2533]: E0120 00:49:37.707945 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:38.286862 kubelet[2533]: E0120 00:49:38.286256 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:38.288245 kubelet[2533]: E0120 00:49:38.287425 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:40.194338 kubelet[2533]: I0120 00:49:40.194207 2533 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:49:40.195623 containerd[1464]: time="2026-01-20T00:49:40.195017041Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:49:40.196612 kubelet[2533]: I0120 00:49:40.196574 2533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:49:41.153046 systemd[1]: Created slice kubepods-besteffort-podae47faa0_71ef_48ce_99c1_f5a671c8301e.slice - libcontainer container kubepods-besteffort-podae47faa0_71ef_48ce_99c1_f5a671c8301e.slice. Jan 20 00:49:41.184831 systemd[1]: Created slice kubepods-burstable-pod2e58220d_e169_4168_9f48_ed376e64edc6.slice - libcontainer container kubepods-burstable-pod2e58220d_e169_4168_9f48_ed376e64edc6.slice. 
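Two things happen back to back here: the kubelet learns its pod CIDR (192.168.0.0/24) and forwards it to the runtime, and containerd notes that no CNI config template is specified, i.e. it will wait for a network plugin (Cilium, whose agent and operator pods are created next) to drop a config into the CNI directory. A sketch for checking both sides once the API server is reachable, assuming kubectl and an admin kubeconfig under /etc/kubernetes are present on the node (neither is shown in this log):

    # The CIDR the kubelet just received, as recorded on the Node object.
    kubectl --kubeconfig /etc/kubernetes/admin.conf get node localhost \
      -o jsonpath='{.spec.podCIDR}{"\n"}'

    # Cilium writes its conflist here once the agent pod is up; until then CNI stays unconfigured.
    ls -l /etc/cni/net.d/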
Jan 20 00:49:41.197924 kubelet[2533]: I0120 00:49:41.197428 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq96n\" (UniqueName: \"kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-kube-api-access-jq96n\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.197924 kubelet[2533]: I0120 00:49:41.197485 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-run\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.197924 kubelet[2533]: I0120 00:49:41.197512 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-hostproc\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.197924 kubelet[2533]: I0120 00:49:41.197583 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cni-path\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.197924 kubelet[2533]: I0120 00:49:41.197608 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e58220d-e169-4168-9f48-ed376e64edc6-clustermesh-secrets\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.197924 kubelet[2533]: I0120 00:49:41.197624 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae47faa0-71ef-48ce-99c1-f5a671c8301e-kube-proxy\") pod \"kube-proxy-mbcnj\" (UID: \"ae47faa0-71ef-48ce-99c1-f5a671c8301e\") " pod="kube-system/kube-proxy-mbcnj" Jan 20 00:49:41.198626 kubelet[2533]: I0120 00:49:41.197636 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-bpf-maps\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198626 kubelet[2533]: I0120 00:49:41.197662 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-cgroup\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198626 kubelet[2533]: I0120 00:49:41.197677 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-config-path\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198626 kubelet[2533]: I0120 00:49:41.197690 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-etc-cni-netd\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198626 kubelet[2533]: I0120 00:49:41.197702 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-xtables-lock\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198626 kubelet[2533]: I0120 00:49:41.197715 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-net\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198745 kubelet[2533]: I0120 00:49:41.197731 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-hubble-tls\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198745 kubelet[2533]: I0120 00:49:41.197744 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrhr\" (UniqueName: \"kubernetes.io/projected/ae47faa0-71ef-48ce-99c1-f5a671c8301e-kube-api-access-trrhr\") pod \"kube-proxy-mbcnj\" (UID: \"ae47faa0-71ef-48ce-99c1-f5a671c8301e\") " pod="kube-system/kube-proxy-mbcnj" Jan 20 00:49:41.198745 kubelet[2533]: I0120 00:49:41.197758 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae47faa0-71ef-48ce-99c1-f5a671c8301e-xtables-lock\") pod \"kube-proxy-mbcnj\" (UID: \"ae47faa0-71ef-48ce-99c1-f5a671c8301e\") " pod="kube-system/kube-proxy-mbcnj" Jan 20 00:49:41.198745 kubelet[2533]: I0120 00:49:41.197775 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae47faa0-71ef-48ce-99c1-f5a671c8301e-lib-modules\") pod \"kube-proxy-mbcnj\" (UID: \"ae47faa0-71ef-48ce-99c1-f5a671c8301e\") " pod="kube-system/kube-proxy-mbcnj" Jan 20 00:49:41.198745 kubelet[2533]: I0120 00:49:41.197791 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-lib-modules\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.198842 kubelet[2533]: I0120 00:49:41.197813 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-kernel\") pod \"cilium-pmnqp\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " pod="kube-system/cilium-pmnqp" Jan 20 00:49:41.229982 systemd[1]: Created slice kubepods-besteffort-pod6e73917d_1e1e_42c7_90ef_dffc705f7dbf.slice - libcontainer container kubepods-besteffort-pod6e73917d_1e1e_42c7_90ef_dffc705f7dbf.slice. 
Jan 20 00:49:41.300225 kubelet[2533]: I0120 00:49:41.299198 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnm7\" (UniqueName: \"kubernetes.io/projected/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-kube-api-access-rdnm7\") pod \"cilium-operator-6c4d7847fc-4ff5x\" (UID: \"6e73917d-1e1e-42c7-90ef-dffc705f7dbf\") " pod="kube-system/cilium-operator-6c4d7847fc-4ff5x" Jan 20 00:49:41.300225 kubelet[2533]: I0120 00:49:41.299291 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4ff5x\" (UID: \"6e73917d-1e1e-42c7-90ef-dffc705f7dbf\") " pod="kube-system/cilium-operator-6c4d7847fc-4ff5x" Jan 20 00:49:41.480897 kubelet[2533]: E0120 00:49:41.480627 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:41.482092 containerd[1464]: time="2026-01-20T00:49:41.481569329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbcnj,Uid:ae47faa0-71ef-48ce-99c1-f5a671c8301e,Namespace:kube-system,Attempt:0,}" Jan 20 00:49:41.490928 kubelet[2533]: E0120 00:49:41.490843 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:41.491746 containerd[1464]: time="2026-01-20T00:49:41.491510654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pmnqp,Uid:2e58220d-e169-4168-9f48-ed376e64edc6,Namespace:kube-system,Attempt:0,}" Jan 20 00:49:41.539337 kubelet[2533]: E0120 00:49:41.538343 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:41.544068 containerd[1464]: time="2026-01-20T00:49:41.543606504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4ff5x,Uid:6e73917d-1e1e-42c7-90ef-dffc705f7dbf,Namespace:kube-system,Attempt:0,}" Jan 20 00:49:41.552613 containerd[1464]: time="2026-01-20T00:49:41.551882705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:41.552613 containerd[1464]: time="2026-01-20T00:49:41.552235707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:41.552613 containerd[1464]: time="2026-01-20T00:49:41.552251065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:41.552613 containerd[1464]: time="2026-01-20T00:49:41.552448185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:41.554876 containerd[1464]: time="2026-01-20T00:49:41.554070212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:41.554876 containerd[1464]: time="2026-01-20T00:49:41.554216096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:41.554876 containerd[1464]: time="2026-01-20T00:49:41.554239760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:41.554876 containerd[1464]: time="2026-01-20T00:49:41.554519745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:41.599541 systemd[1]: Started cri-containerd-17c53bef3a90bfcd0e13f13011248ab3f80b291400b89cf80b06c5f204c02159.scope - libcontainer container 17c53bef3a90bfcd0e13f13011248ab3f80b291400b89cf80b06c5f204c02159. Jan 20 00:49:41.602504 systemd[1]: Started cri-containerd-73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c.scope - libcontainer container 73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c. Jan 20 00:49:41.620314 containerd[1464]: time="2026-01-20T00:49:41.619554271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:41.620314 containerd[1464]: time="2026-01-20T00:49:41.619667704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:41.620314 containerd[1464]: time="2026-01-20T00:49:41.619689965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:41.620314 containerd[1464]: time="2026-01-20T00:49:41.619803078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:41.668994 systemd[1]: Started cri-containerd-e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b.scope - libcontainer container e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b. 
Jan 20 00:49:41.683590 containerd[1464]: time="2026-01-20T00:49:41.683426703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbcnj,Uid:ae47faa0-71ef-48ce-99c1-f5a671c8301e,Namespace:kube-system,Attempt:0,} returns sandbox id \"17c53bef3a90bfcd0e13f13011248ab3f80b291400b89cf80b06c5f204c02159\"" Jan 20 00:49:41.686955 containerd[1464]: time="2026-01-20T00:49:41.686762586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pmnqp,Uid:2e58220d-e169-4168-9f48-ed376e64edc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\"" Jan 20 00:49:41.692033 kubelet[2533]: E0120 00:49:41.691945 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:41.692358 kubelet[2533]: E0120 00:49:41.692249 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:41.694882 containerd[1464]: time="2026-01-20T00:49:41.694791449Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 00:49:41.699716 containerd[1464]: time="2026-01-20T00:49:41.699666810Z" level=info msg="CreateContainer within sandbox \"17c53bef3a90bfcd0e13f13011248ab3f80b291400b89cf80b06c5f204c02159\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:49:41.740180 containerd[1464]: time="2026-01-20T00:49:41.739772930Z" level=info msg="CreateContainer within sandbox \"17c53bef3a90bfcd0e13f13011248ab3f80b291400b89cf80b06c5f204c02159\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b44836d34ec74825b5ea2842788706b5a55a2f7ea3b777dabd91af92a2d1386\"" Jan 20 00:49:41.742186 containerd[1464]: time="2026-01-20T00:49:41.742094239Z" level=info msg="StartContainer for \"4b44836d34ec74825b5ea2842788706b5a55a2f7ea3b777dabd91af92a2d1386\"" Jan 20 00:49:41.804352 systemd[1]: Started cri-containerd-4b44836d34ec74825b5ea2842788706b5a55a2f7ea3b777dabd91af92a2d1386.scope - libcontainer container 4b44836d34ec74825b5ea2842788706b5a55a2f7ea3b777dabd91af92a2d1386. 
Jan 20 00:49:41.807311 containerd[1464]: time="2026-01-20T00:49:41.807259928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4ff5x,Uid:6e73917d-1e1e-42c7-90ef-dffc705f7dbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\"" Jan 20 00:49:41.812178 kubelet[2533]: E0120 00:49:41.809548 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:41.862699 containerd[1464]: time="2026-01-20T00:49:41.862647701Z" level=info msg="StartContainer for \"4b44836d34ec74825b5ea2842788706b5a55a2f7ea3b777dabd91af92a2d1386\" returns successfully" Jan 20 00:49:41.916774 kubelet[2533]: E0120 00:49:41.916678 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:42.300641 kubelet[2533]: E0120 00:49:42.300014 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:42.300641 kubelet[2533]: E0120 00:49:42.300414 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:42.316733 kubelet[2533]: I0120 00:49:42.316612 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mbcnj" podStartSLOduration=1.316591783 podStartE2EDuration="1.316591783s" podCreationTimestamp="2026-01-20 00:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:49:42.314895163 +0000 UTC m=+8.184682996" watchObservedRunningTime="2026-01-20 00:49:42.316591783 +0000 UTC m=+8.186379635" Jan 20 00:49:43.302139 kubelet[2533]: E0120 00:49:43.302019 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:55.307858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065019938.mount: Deactivated successfully. 
Jan 20 00:49:58.286769 containerd[1464]: time="2026-01-20T00:49:58.286640831Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:58.288383 containerd[1464]: time="2026-01-20T00:49:58.288311525Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 00:49:58.290564 containerd[1464]: time="2026-01-20T00:49:58.290427617Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:58.291972 containerd[1464]: time="2026-01-20T00:49:58.291906746Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.597054093s" Jan 20 00:49:58.291972 containerd[1464]: time="2026-01-20T00:49:58.291961789Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 00:49:58.295401 containerd[1464]: time="2026-01-20T00:49:58.295029790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 00:49:58.304339 containerd[1464]: time="2026-01-20T00:49:58.303809728Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:49:58.406943 containerd[1464]: time="2026-01-20T00:49:58.406812422Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\"" Jan 20 00:49:58.407878 containerd[1464]: time="2026-01-20T00:49:58.407784041Z" level=info msg="StartContainer for \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\"" Jan 20 00:49:58.465402 systemd[1]: Started cri-containerd-959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff.scope - libcontainer container 959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff. Jan 20 00:49:58.505779 containerd[1464]: time="2026-01-20T00:49:58.505627621Z" level=info msg="StartContainer for \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\" returns successfully" Jan 20 00:49:58.528306 systemd[1]: cri-containerd-959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff.scope: Deactivated successfully. 
Jan 20 00:49:58.763941 containerd[1464]: time="2026-01-20T00:49:58.763727536Z" level=info msg="shim disconnected" id=959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff namespace=k8s.io Jan 20 00:49:58.763941 containerd[1464]: time="2026-01-20T00:49:58.763918133Z" level=warning msg="cleaning up after shim disconnected" id=959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff namespace=k8s.io Jan 20 00:49:58.763941 containerd[1464]: time="2026-01-20T00:49:58.763934574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:49:59.383706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff-rootfs.mount: Deactivated successfully. Jan 20 00:49:59.408711 kubelet[2533]: E0120 00:49:59.408587 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:59.414326 containerd[1464]: time="2026-01-20T00:49:59.414045599Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:49:59.434685 containerd[1464]: time="2026-01-20T00:49:59.434576592Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\"" Jan 20 00:49:59.435665 containerd[1464]: time="2026-01-20T00:49:59.435563569Z" level=info msg="StartContainer for \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\"" Jan 20 00:49:59.485348 systemd[1]: Started cri-containerd-ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7.scope - libcontainer container ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7. Jan 20 00:49:59.534038 containerd[1464]: time="2026-01-20T00:49:59.533968687Z" level=info msg="StartContainer for \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\" returns successfully" Jan 20 00:49:59.560605 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:49:59.560973 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:49:59.561086 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:49:59.570710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:49:59.571093 systemd[1]: cri-containerd-ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7.scope: Deactivated successfully. Jan 20 00:49:59.602567 containerd[1464]: time="2026-01-20T00:49:59.602447206Z" level=info msg="shim disconnected" id=ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7 namespace=k8s.io Jan 20 00:49:59.602567 containerd[1464]: time="2026-01-20T00:49:59.602566108Z" level=warning msg="cleaning up after shim disconnected" id=ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7 namespace=k8s.io Jan 20 00:49:59.602808 containerd[1464]: time="2026-01-20T00:49:59.602577008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:49:59.604072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:50:00.383841 systemd[1]: run-containerd-runc-k8s.io-ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7-runc.6YuQtf.mount: Deactivated successfully. 
Jan 20 00:50:00.383990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7-rootfs.mount: Deactivated successfully. Jan 20 00:50:00.414588 kubelet[2533]: E0120 00:50:00.414545 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:00.423775 containerd[1464]: time="2026-01-20T00:50:00.423703981Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:50:00.457630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819281971.mount: Deactivated successfully. Jan 20 00:50:00.461712 containerd[1464]: time="2026-01-20T00:50:00.461601174Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\"" Jan 20 00:50:00.463904 containerd[1464]: time="2026-01-20T00:50:00.463840785Z" level=info msg="StartContainer for \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\"" Jan 20 00:50:00.513350 systemd[1]: Started cri-containerd-1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a.scope - libcontainer container 1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a. Jan 20 00:50:00.581474 containerd[1464]: time="2026-01-20T00:50:00.581062022Z" level=info msg="StartContainer for \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\" returns successfully" Jan 20 00:50:00.583046 systemd[1]: cri-containerd-1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a.scope: Deactivated successfully. 
Jan 20 00:50:00.653432 containerd[1464]: time="2026-01-20T00:50:00.653199589Z" level=info msg="shim disconnected" id=1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a namespace=k8s.io Jan 20 00:50:00.653432 containerd[1464]: time="2026-01-20T00:50:00.653277165Z" level=warning msg="cleaning up after shim disconnected" id=1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a namespace=k8s.io Jan 20 00:50:00.653432 containerd[1464]: time="2026-01-20T00:50:00.653288136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:50:00.909410 containerd[1464]: time="2026-01-20T00:50:00.909056598Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:00.910817 containerd[1464]: time="2026-01-20T00:50:00.910689219Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 00:50:00.912173 containerd[1464]: time="2026-01-20T00:50:00.911969805Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:00.913288 containerd[1464]: time="2026-01-20T00:50:00.913226112Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.618149083s" Jan 20 00:50:00.913288 containerd[1464]: time="2026-01-20T00:50:00.913278901Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 00:50:00.920213 containerd[1464]: time="2026-01-20T00:50:00.920055755Z" level=info msg="CreateContainer within sandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 00:50:00.936285 containerd[1464]: time="2026-01-20T00:50:00.936077213Z" level=info msg="CreateContainer within sandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\"" Jan 20 00:50:00.936977 containerd[1464]: time="2026-01-20T00:50:00.936897451Z" level=info msg="StartContainer for \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\"" Jan 20 00:50:00.992591 systemd[1]: Started cri-containerd-2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c.scope - libcontainer container 2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c. 
Jan 20 00:50:01.029699 containerd[1464]: time="2026-01-20T00:50:01.029580230Z" level=info msg="StartContainer for \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\" returns successfully" Jan 20 00:50:01.386603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a-rootfs.mount: Deactivated successfully. Jan 20 00:50:01.426013 kubelet[2533]: E0120 00:50:01.425932 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:01.436549 kubelet[2533]: E0120 00:50:01.436424 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:01.485926 containerd[1464]: time="2026-01-20T00:50:01.485783985Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:50:01.495946 kubelet[2533]: I0120 00:50:01.495789 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4ff5x" podStartSLOduration=1.397311438 podStartE2EDuration="20.495769453s" podCreationTimestamp="2026-01-20 00:49:41 +0000 UTC" firstStartedPulling="2026-01-20 00:49:41.816454914 +0000 UTC m=+7.686242747" lastFinishedPulling="2026-01-20 00:50:00.914912928 +0000 UTC m=+26.784700762" observedRunningTime="2026-01-20 00:50:01.49462158 +0000 UTC m=+27.364409433" watchObservedRunningTime="2026-01-20 00:50:01.495769453 +0000 UTC m=+27.365557286" Jan 20 00:50:01.519889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134218326.mount: Deactivated successfully. Jan 20 00:50:01.532209 containerd[1464]: time="2026-01-20T00:50:01.530271470Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\"" Jan 20 00:50:01.532917 containerd[1464]: time="2026-01-20T00:50:01.532844644Z" level=info msg="StartContainer for \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\"" Jan 20 00:50:01.633406 systemd[1]: Started cri-containerd-181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df.scope - libcontainer container 181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df. Jan 20 00:50:01.759173 systemd[1]: cri-containerd-181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df.scope: Deactivated successfully. 
Jan 20 00:50:01.760024 containerd[1464]: time="2026-01-20T00:50:01.759488617Z" level=info msg="StartContainer for \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\" returns successfully" Jan 20 00:50:01.870256 containerd[1464]: time="2026-01-20T00:50:01.870157640Z" level=info msg="shim disconnected" id=181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df namespace=k8s.io Jan 20 00:50:01.870256 containerd[1464]: time="2026-01-20T00:50:01.870241999Z" level=warning msg="cleaning up after shim disconnected" id=181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df namespace=k8s.io Jan 20 00:50:01.870256 containerd[1464]: time="2026-01-20T00:50:01.870254612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:50:02.383589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df-rootfs.mount: Deactivated successfully. Jan 20 00:50:02.443907 kubelet[2533]: E0120 00:50:02.443859 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:02.444676 kubelet[2533]: E0120 00:50:02.444014 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:02.477768 containerd[1464]: time="2026-01-20T00:50:02.477598264Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:50:02.545555 containerd[1464]: time="2026-01-20T00:50:02.545395261Z" level=info msg="CreateContainer within sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\"" Jan 20 00:50:02.547623 containerd[1464]: time="2026-01-20T00:50:02.547305199Z" level=info msg="StartContainer for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\"" Jan 20 00:50:02.597395 systemd[1]: Started cri-containerd-4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4.scope - libcontainer container 4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4. 
Jan 20 00:50:02.650615 containerd[1464]: time="2026-01-20T00:50:02.650354696Z" level=info msg="StartContainer for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" returns successfully" Jan 20 00:50:02.819152 kubelet[2533]: I0120 00:50:02.819073 2533 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:50:02.881228 kubelet[2533]: I0120 00:50:02.881171 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89da7a64-e168-41dc-b186-d2fb563ea5c0-config-volume\") pod \"coredns-674b8bbfcf-4zgll\" (UID: \"89da7a64-e168-41dc-b186-d2fb563ea5c0\") " pod="kube-system/coredns-674b8bbfcf-4zgll" Jan 20 00:50:02.881228 kubelet[2533]: I0120 00:50:02.881232 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v592\" (UniqueName: \"kubernetes.io/projected/e9b5b7e8-1d9c-4dcb-a6e9-e2cc6a722a82-kube-api-access-6v592\") pod \"coredns-674b8bbfcf-zzt87\" (UID: \"e9b5b7e8-1d9c-4dcb-a6e9-e2cc6a722a82\") " pod="kube-system/coredns-674b8bbfcf-zzt87" Jan 20 00:50:02.881228 kubelet[2533]: I0120 00:50:02.881255 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbstc\" (UniqueName: \"kubernetes.io/projected/89da7a64-e168-41dc-b186-d2fb563ea5c0-kube-api-access-kbstc\") pod \"coredns-674b8bbfcf-4zgll\" (UID: \"89da7a64-e168-41dc-b186-d2fb563ea5c0\") " pod="kube-system/coredns-674b8bbfcf-4zgll" Jan 20 00:50:02.881228 kubelet[2533]: I0120 00:50:02.881271 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9b5b7e8-1d9c-4dcb-a6e9-e2cc6a722a82-config-volume\") pod \"coredns-674b8bbfcf-zzt87\" (UID: \"e9b5b7e8-1d9c-4dcb-a6e9-e2cc6a722a82\") " pod="kube-system/coredns-674b8bbfcf-zzt87" Jan 20 00:50:02.896589 systemd[1]: Created slice kubepods-burstable-pod89da7a64_e168_41dc_b186_d2fb563ea5c0.slice - libcontainer container kubepods-burstable-pod89da7a64_e168_41dc_b186_d2fb563ea5c0.slice. Jan 20 00:50:02.910352 systemd[1]: Created slice kubepods-burstable-pode9b5b7e8_1d9c_4dcb_a6e9_e2cc6a722a82.slice - libcontainer container kubepods-burstable-pode9b5b7e8_1d9c_4dcb_a6e9_e2cc6a722a82.slice. Jan 20 00:50:03.208876 kubelet[2533]: E0120 00:50:03.208728 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:03.211002 containerd[1464]: time="2026-01-20T00:50:03.210659876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4zgll,Uid:89da7a64-e168-41dc-b186-d2fb563ea5c0,Namespace:kube-system,Attempt:0,}" Jan 20 00:50:03.215251 kubelet[2533]: E0120 00:50:03.215189 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:03.216177 containerd[1464]: time="2026-01-20T00:50:03.216002212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zzt87,Uid:e9b5b7e8-1d9c-4dcb-a6e9-e2cc6a722a82,Namespace:kube-system,Attempt:0,}" Jan 20 00:50:03.393609 systemd[1]: run-containerd-runc-k8s.io-4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4-runc.fzMwUs.mount: Deactivated successfully. 
Jan 20 00:50:03.451485 kubelet[2533]: E0120 00:50:03.451426 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:04.454547 kubelet[2533]: E0120 00:50:04.454387 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:05.221878 systemd-networkd[1392]: cilium_host: Link UP Jan 20 00:50:05.226309 systemd-networkd[1392]: cilium_net: Link UP Jan 20 00:50:05.227600 systemd-networkd[1392]: cilium_net: Gained carrier Jan 20 00:50:05.227866 systemd-networkd[1392]: cilium_host: Gained carrier Jan 20 00:50:05.424610 systemd-networkd[1392]: cilium_net: Gained IPv6LL Jan 20 00:50:05.437199 systemd-networkd[1392]: cilium_vxlan: Link UP Jan 20 00:50:05.437214 systemd-networkd[1392]: cilium_vxlan: Gained carrier Jan 20 00:50:05.456750 kubelet[2533]: E0120 00:50:05.456639 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:05.600569 systemd-networkd[1392]: cilium_host: Gained IPv6LL Jan 20 00:50:05.774217 kernel: NET: Registered PF_ALG protocol family Jan 20 00:50:06.776650 systemd-networkd[1392]: lxc_health: Link UP Jan 20 00:50:06.785704 systemd-networkd[1392]: lxc_health: Gained carrier Jan 20 00:50:06.792521 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Jan 20 00:50:07.332929 systemd-networkd[1392]: lxc542781ae142c: Link UP Jan 20 00:50:07.345167 kernel: eth0: renamed from tmp5c9ee Jan 20 00:50:07.349623 systemd-networkd[1392]: lxc542781ae142c: Gained carrier Jan 20 00:50:07.366450 systemd-networkd[1392]: lxc17001f2b9b33: Link UP Jan 20 00:50:07.376393 kernel: eth0: renamed from tmpd652a Jan 20 00:50:07.386714 systemd-networkd[1392]: lxc17001f2b9b33: Gained carrier Jan 20 00:50:07.494148 kubelet[2533]: E0120 00:50:07.493974 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:07.525234 kubelet[2533]: I0120 00:50:07.525013 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pmnqp" podStartSLOduration=9.924353995 podStartE2EDuration="26.52499569s" podCreationTimestamp="2026-01-20 00:49:41 +0000 UTC" firstStartedPulling="2026-01-20 00:49:41.693887058 +0000 UTC m=+7.563674891" lastFinishedPulling="2026-01-20 00:49:58.294528753 +0000 UTC m=+24.164316586" observedRunningTime="2026-01-20 00:50:03.482597849 +0000 UTC m=+29.352385711" watchObservedRunningTime="2026-01-20 00:50:07.52499569 +0000 UTC m=+33.394783523" Jan 20 00:50:08.392406 systemd-networkd[1392]: lxc_health: Gained IPv6LL Jan 20 00:50:08.457041 systemd-networkd[1392]: lxc542781ae142c: Gained IPv6LL Jan 20 00:50:08.464224 kubelet[2533]: E0120 00:50:08.464191 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:09.099424 systemd-networkd[1392]: lxc17001f2b9b33: Gained IPv6LL Jan 20 00:50:09.466450 kubelet[2533]: E0120 00:50:09.466249 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:11.796013 
containerd[1464]: time="2026-01-20T00:50:11.795502635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:50:11.796013 containerd[1464]: time="2026-01-20T00:50:11.795619615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:50:11.796013 containerd[1464]: time="2026-01-20T00:50:11.795634753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:11.796727 containerd[1464]: time="2026-01-20T00:50:11.795854884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:11.821229 containerd[1464]: time="2026-01-20T00:50:11.820984442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:50:11.822848 containerd[1464]: time="2026-01-20T00:50:11.821815115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:50:11.824157 containerd[1464]: time="2026-01-20T00:50:11.823898702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:11.825477 containerd[1464]: time="2026-01-20T00:50:11.824674564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:11.827290 systemd[1]: run-containerd-runc-k8s.io-d652aa8ec7a643fbbd431ecd319edb49f1fcf737cc0652d628fc7d51420cf5d9-runc.NnGQYB.mount: Deactivated successfully. Jan 20 00:50:11.845839 systemd[1]: Started cri-containerd-d652aa8ec7a643fbbd431ecd319edb49f1fcf737cc0652d628fc7d51420cf5d9.scope - libcontainer container d652aa8ec7a643fbbd431ecd319edb49f1fcf737cc0652d628fc7d51420cf5d9. Jan 20 00:50:11.874085 systemd[1]: Started cri-containerd-5c9eee494c0e5f99f4aa8d91e7b92e4c0bebe877272afcf89f190a9deb5e8a0d.scope - libcontainer container 5c9eee494c0e5f99f4aa8d91e7b92e4c0bebe877272afcf89f190a9deb5e8a0d. 
Jan 20 00:50:11.891299 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:50:11.897965 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:50:11.936184 containerd[1464]: time="2026-01-20T00:50:11.936070868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zzt87,Uid:e9b5b7e8-1d9c-4dcb-a6e9-e2cc6a722a82,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c9eee494c0e5f99f4aa8d91e7b92e4c0bebe877272afcf89f190a9deb5e8a0d\"" Jan 20 00:50:11.939469 containerd[1464]: time="2026-01-20T00:50:11.939409870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4zgll,Uid:89da7a64-e168-41dc-b186-d2fb563ea5c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d652aa8ec7a643fbbd431ecd319edb49f1fcf737cc0652d628fc7d51420cf5d9\"" Jan 20 00:50:11.939884 kubelet[2533]: E0120 00:50:11.939680 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:11.942711 kubelet[2533]: E0120 00:50:11.942442 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:11.958203 containerd[1464]: time="2026-01-20T00:50:11.958151905Z" level=info msg="CreateContainer within sandbox \"d652aa8ec7a643fbbd431ecd319edb49f1fcf737cc0652d628fc7d51420cf5d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:50:11.961792 containerd[1464]: time="2026-01-20T00:50:11.961597132Z" level=info msg="CreateContainer within sandbox \"5c9eee494c0e5f99f4aa8d91e7b92e4c0bebe877272afcf89f190a9deb5e8a0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:50:11.984400 containerd[1464]: time="2026-01-20T00:50:11.984262275Z" level=info msg="CreateContainer within sandbox \"5c9eee494c0e5f99f4aa8d91e7b92e4c0bebe877272afcf89f190a9deb5e8a0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"169e276ae93e86626ddcc98188eaff169e193f1c133dea91cbd6cc6e6dc1ed78\"" Jan 20 00:50:11.985237 containerd[1464]: time="2026-01-20T00:50:11.985193437Z" level=info msg="StartContainer for \"169e276ae93e86626ddcc98188eaff169e193f1c133dea91cbd6cc6e6dc1ed78\"" Jan 20 00:50:11.990869 containerd[1464]: time="2026-01-20T00:50:11.990793442Z" level=info msg="CreateContainer within sandbox \"d652aa8ec7a643fbbd431ecd319edb49f1fcf737cc0652d628fc7d51420cf5d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c404e290c4b7eab2a6850cb2557d1fad085be39bd2e421fcf26821984cb163b6\"" Jan 20 00:50:11.992648 containerd[1464]: time="2026-01-20T00:50:11.991858834Z" level=info msg="StartContainer for \"c404e290c4b7eab2a6850cb2557d1fad085be39bd2e421fcf26821984cb163b6\"" Jan 20 00:50:12.037337 systemd[1]: Started cri-containerd-c404e290c4b7eab2a6850cb2557d1fad085be39bd2e421fcf26821984cb163b6.scope - libcontainer container c404e290c4b7eab2a6850cb2557d1fad085be39bd2e421fcf26821984cb163b6. Jan 20 00:50:12.044834 systemd[1]: Started cri-containerd-169e276ae93e86626ddcc98188eaff169e193f1c133dea91cbd6cc6e6dc1ed78.scope - libcontainer container 169e276ae93e86626ddcc98188eaff169e193f1c133dea91cbd6cc6e6dc1ed78. 
Jan 20 00:50:12.101698 containerd[1464]: time="2026-01-20T00:50:12.101440502Z" level=info msg="StartContainer for \"c404e290c4b7eab2a6850cb2557d1fad085be39bd2e421fcf26821984cb163b6\" returns successfully" Jan 20 00:50:12.110479 containerd[1464]: time="2026-01-20T00:50:12.110326096Z" level=info msg="StartContainer for \"169e276ae93e86626ddcc98188eaff169e193f1c133dea91cbd6cc6e6dc1ed78\" returns successfully" Jan 20 00:50:12.474723 kubelet[2533]: E0120 00:50:12.474433 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:12.480915 kubelet[2533]: E0120 00:50:12.480822 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:12.513346 kubelet[2533]: I0120 00:50:12.513241 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4zgll" podStartSLOduration=31.513220354 podStartE2EDuration="31.513220354s" podCreationTimestamp="2026-01-20 00:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:50:12.49280115 +0000 UTC m=+38.362588993" watchObservedRunningTime="2026-01-20 00:50:12.513220354 +0000 UTC m=+38.383008197" Jan 20 00:50:13.483006 kubelet[2533]: E0120 00:50:13.482949 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:13.483521 kubelet[2533]: E0120 00:50:13.483028 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:14.485844 kubelet[2533]: E0120 00:50:14.485806 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:14.486409 kubelet[2533]: E0120 00:50:14.486010 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:24.950849 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:50214.service - OpenSSH per-connection server daemon (10.0.0.1:50214). Jan 20 00:50:25.011181 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 50214 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:25.014005 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:25.020043 systemd-logind[1447]: New session 8 of user core. Jan 20 00:50:25.033516 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 00:50:25.439029 sshd[3943]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:25.457885 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:50214.service: Deactivated successfully. Jan 20 00:50:25.460359 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:50:25.461480 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:50:25.463574 systemd-logind[1447]: Removed session 8. Jan 20 00:50:30.470397 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:50226.service - OpenSSH per-connection server daemon (10.0.0.1:50226). 
Jan 20 00:50:30.517005 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:30.519704 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:30.525831 systemd-logind[1447]: New session 9 of user core. Jan 20 00:50:30.534416 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:50:30.678862 sshd[3964]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:30.684424 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:50226.service: Deactivated successfully. Jan 20 00:50:30.687079 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:50:30.688592 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:50:30.690324 systemd-logind[1447]: Removed session 9. Jan 20 00:50:35.693517 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:46540.service - OpenSSH per-connection server daemon (10.0.0.1:46540). Jan 20 00:50:35.737042 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 46540 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:35.739877 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:35.753360 systemd-logind[1447]: New session 10 of user core. Jan 20 00:50:35.764502 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:50:35.918289 sshd[3981]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:35.925587 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:46540.service: Deactivated successfully. Jan 20 00:50:35.929317 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:50:35.930871 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:50:35.933959 systemd-logind[1447]: Removed session 10. Jan 20 00:50:40.933269 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:46554.service - OpenSSH per-connection server daemon (10.0.0.1:46554). Jan 20 00:50:40.992404 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 46554 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:40.994983 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:41.004889 systemd-logind[1447]: New session 11 of user core. Jan 20 00:50:41.015377 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:50:41.166949 sshd[3996]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:41.172592 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:46554.service: Deactivated successfully. Jan 20 00:50:41.175547 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:50:41.177078 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:50:41.179374 systemd-logind[1447]: Removed session 11. Jan 20 00:50:46.188758 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:33512.service - OpenSSH per-connection server daemon (10.0.0.1:33512). Jan 20 00:50:46.237420 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 33512 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:46.242016 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:46.249666 systemd-logind[1447]: New session 12 of user core. Jan 20 00:50:46.258439 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 20 00:50:46.425664 sshd[4013]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:46.431502 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:33512.service: Deactivated successfully. Jan 20 00:50:46.435065 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:50:46.436795 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:50:46.458173 systemd-logind[1447]: Removed session 12. Jan 20 00:50:49.268089 kubelet[2533]: E0120 00:50:49.267987 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:51.438277 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:33514.service - OpenSSH per-connection server daemon (10.0.0.1:33514). Jan 20 00:50:51.479645 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 33514 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:51.481345 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:51.487624 systemd-logind[1447]: New session 13 of user core. Jan 20 00:50:51.500285 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:50:51.620375 sshd[4029]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:51.624855 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:33514.service: Deactivated successfully. Jan 20 00:50:51.627278 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:50:51.628358 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:50:51.630284 systemd-logind[1447]: Removed session 13. Jan 20 00:50:55.268436 kubelet[2533]: E0120 00:50:55.268301 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:56.635709 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:55666.service - OpenSSH per-connection server daemon (10.0.0.1:55666). Jan 20 00:50:56.681026 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:56.683204 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:56.689259 systemd-logind[1447]: New session 14 of user core. Jan 20 00:50:56.695419 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 00:50:56.837603 sshd[4044]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:56.846579 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:55666.service: Deactivated successfully. Jan 20 00:50:56.848693 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:50:56.850771 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:50:56.856544 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:55678.service - OpenSSH per-connection server daemon (10.0.0.1:55678). Jan 20 00:50:56.857996 systemd-logind[1447]: Removed session 14. Jan 20 00:50:56.897825 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 55678 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:56.900338 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:56.906776 systemd-logind[1447]: New session 15 of user core. Jan 20 00:50:56.917378 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 00:50:57.102208 sshd[4060]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:57.119435 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:55678.service: Deactivated successfully. Jan 20 00:50:57.122508 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:50:57.126310 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:50:57.142062 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:55686.service - OpenSSH per-connection server daemon (10.0.0.1:55686). Jan 20 00:50:57.144433 systemd-logind[1447]: Removed session 15. Jan 20 00:50:57.178087 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 55686 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:50:57.180207 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:57.185450 systemd-logind[1447]: New session 16 of user core. Jan 20 00:50:57.199406 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:50:57.268446 kubelet[2533]: E0120 00:50:57.268408 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:57.331413 sshd[4072]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:57.336022 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:55686.service: Deactivated successfully. Jan 20 00:50:57.338619 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:50:57.340083 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:50:57.342041 systemd-logind[1447]: Removed session 16. Jan 20 00:51:02.344252 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:55692.service - OpenSSH per-connection server daemon (10.0.0.1:55692). Jan 20 00:51:02.384338 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 55692 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:02.386094 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:02.391277 systemd-logind[1447]: New session 17 of user core. Jan 20 00:51:02.401296 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:51:02.519332 sshd[4087]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:02.523546 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:55692.service: Deactivated successfully. Jan 20 00:51:02.525452 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:51:02.526283 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:51:02.527577 systemd-logind[1447]: Removed session 17. Jan 20 00:51:05.268333 kubelet[2533]: E0120 00:51:05.268266 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:07.538485 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:42222.service - OpenSSH per-connection server daemon (10.0.0.1:42222). Jan 20 00:51:07.580032 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 42222 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:07.582341 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:07.588224 systemd-logind[1447]: New session 18 of user core. Jan 20 00:51:07.597395 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 00:51:07.735550 sshd[4102]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:07.762985 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:42222.service: Deactivated successfully. Jan 20 00:51:07.765485 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:51:07.767247 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:51:07.773717 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:42238.service - OpenSSH per-connection server daemon (10.0.0.1:42238). Jan 20 00:51:07.775161 systemd-logind[1447]: Removed session 18. Jan 20 00:51:07.809550 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:07.811314 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:07.817567 systemd-logind[1447]: New session 19 of user core. Jan 20 00:51:07.825454 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:51:08.083209 sshd[4116]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:08.098585 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:42238.service: Deactivated successfully. Jan 20 00:51:08.100804 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:51:08.102391 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 20 00:51:08.113449 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:42248.service - OpenSSH per-connection server daemon (10.0.0.1:42248). Jan 20 00:51:08.114724 systemd-logind[1447]: Removed session 19. Jan 20 00:51:08.151623 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 42248 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:08.153459 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:08.159143 systemd-logind[1447]: New session 20 of user core. Jan 20 00:51:08.170291 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:51:08.731389 sshd[4129]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:08.743899 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:42248.service: Deactivated successfully. Jan 20 00:51:08.748021 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:51:08.750570 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:51:08.756799 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:42258.service - OpenSSH per-connection server daemon (10.0.0.1:42258). Jan 20 00:51:08.758392 systemd-logind[1447]: Removed session 20. Jan 20 00:51:08.797222 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 42258 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:08.799685 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:08.805958 systemd-logind[1447]: New session 21 of user core. Jan 20 00:51:08.814492 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:51:09.133059 sshd[4148]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:09.144489 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:42258.service: Deactivated successfully. Jan 20 00:51:09.147199 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:51:09.149705 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:51:09.156686 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:42262.service - OpenSSH per-connection server daemon (10.0.0.1:42262). 
Jan 20 00:51:09.158539 systemd-logind[1447]: Removed session 21. Jan 20 00:51:09.199302 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 42262 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:09.201316 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:09.207819 systemd-logind[1447]: New session 22 of user core. Jan 20 00:51:09.216320 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 00:51:09.334683 sshd[4161]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:09.340017 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:42262.service: Deactivated successfully. Jan 20 00:51:09.342067 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:51:09.343012 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:51:09.344682 systemd-logind[1447]: Removed session 22. Jan 20 00:51:14.385647 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:58378.service - OpenSSH per-connection server daemon (10.0.0.1:58378). Jan 20 00:51:14.481209 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 58378 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:14.484364 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:14.526173 systemd-logind[1447]: New session 23 of user core. Jan 20 00:51:14.548417 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 00:51:14.739462 sshd[4177]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:14.747635 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:58378.service: Deactivated successfully. Jan 20 00:51:14.749767 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:51:14.750955 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:51:14.752898 systemd-logind[1447]: Removed session 23. Jan 20 00:51:15.267605 kubelet[2533]: E0120 00:51:15.267523 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:19.753318 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:58392.service - OpenSSH per-connection server daemon (10.0.0.1:58392). Jan 20 00:51:19.795325 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:19.798374 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:19.804988 systemd-logind[1447]: New session 24 of user core. Jan 20 00:51:19.819410 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 00:51:19.959337 sshd[4192]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:19.964465 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:58392.service: Deactivated successfully. Jan 20 00:51:19.967497 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:51:19.968640 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:51:19.970744 systemd-logind[1447]: Removed session 24. Jan 20 00:51:24.975934 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:49510.service - OpenSSH per-connection server daemon (10.0.0.1:49510). 
Jan 20 00:51:25.021296 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 49510 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:25.023291 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:25.028780 systemd-logind[1447]: New session 25 of user core. Jan 20 00:51:25.034293 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 00:51:25.147382 sshd[4208]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:25.151553 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:49510.service: Deactivated successfully. Jan 20 00:51:25.153513 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:51:25.154437 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:51:25.155960 systemd-logind[1447]: Removed session 25. Jan 20 00:51:29.267838 kubelet[2533]: E0120 00:51:29.267758 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:30.163771 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:49514.service - OpenSSH per-connection server daemon (10.0.0.1:49514). Jan 20 00:51:30.211745 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 49514 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:30.214464 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:30.219814 systemd-logind[1447]: New session 26 of user core. Jan 20 00:51:30.227279 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 00:51:30.267909 kubelet[2533]: E0120 00:51:30.267794 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:30.346362 sshd[4222]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:30.362767 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:49514.service: Deactivated successfully. Jan 20 00:51:30.364608 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:51:30.366207 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:51:30.371426 systemd[1]: Started sshd@26-10.0.0.135:22-10.0.0.1:49518.service - OpenSSH per-connection server daemon (10.0.0.1:49518). Jan 20 00:51:30.372400 systemd-logind[1447]: Removed session 26. Jan 20 00:51:30.404844 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 49518 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:30.406272 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:30.410549 systemd-logind[1447]: New session 27 of user core. Jan 20 00:51:30.420278 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 20 00:51:31.750311 kubelet[2533]: I0120 00:51:31.748633 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zzt87" podStartSLOduration=110.748544123 podStartE2EDuration="1m50.748544123s" podCreationTimestamp="2026-01-20 00:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:50:12.531855902 +0000 UTC m=+38.401643735" watchObservedRunningTime="2026-01-20 00:51:31.748544123 +0000 UTC m=+117.618331956" Jan 20 00:51:31.761054 containerd[1464]: time="2026-01-20T00:51:31.760991734Z" level=info msg="StopContainer for \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\" with timeout 30 (s)" Jan 20 00:51:31.762746 containerd[1464]: time="2026-01-20T00:51:31.762479025Z" level=info msg="Stop container \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\" with signal terminated" Jan 20 00:51:31.793966 systemd[1]: cri-containerd-2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c.scope: Deactivated successfully. Jan 20 00:51:31.821624 containerd[1464]: time="2026-01-20T00:51:31.820903397Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:51:31.832613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c-rootfs.mount: Deactivated successfully. Jan 20 00:51:31.835083 containerd[1464]: time="2026-01-20T00:51:31.835019912Z" level=info msg="StopContainer for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" with timeout 2 (s)" Jan 20 00:51:31.835591 containerd[1464]: time="2026-01-20T00:51:31.835534960Z" level=info msg="Stop container \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" with signal terminated" Jan 20 00:51:31.844194 containerd[1464]: time="2026-01-20T00:51:31.844066362Z" level=info msg="shim disconnected" id=2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c namespace=k8s.io Jan 20 00:51:31.845313 containerd[1464]: time="2026-01-20T00:51:31.845188020Z" level=warning msg="cleaning up after shim disconnected" id=2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c namespace=k8s.io Jan 20 00:51:31.845735 containerd[1464]: time="2026-01-20T00:51:31.845450720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:51:31.848235 systemd-networkd[1392]: lxc_health: Link DOWN Jan 20 00:51:31.848261 systemd-networkd[1392]: lxc_health: Lost carrier Jan 20 00:51:31.871779 systemd[1]: cri-containerd-4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4.scope: Deactivated successfully. Jan 20 00:51:31.872439 systemd[1]: cri-containerd-4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4.scope: Consumed 10.847s CPU time. 
Jan 20 00:51:31.884559 containerd[1464]: time="2026-01-20T00:51:31.884205543Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:51:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:51:31.890917 containerd[1464]: time="2026-01-20T00:51:31.890814208Z" level=info msg="StopContainer for \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\" returns successfully" Jan 20 00:51:31.892040 containerd[1464]: time="2026-01-20T00:51:31.891961154Z" level=info msg="StopPodSandbox for \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\"" Jan 20 00:51:31.892040 containerd[1464]: time="2026-01-20T00:51:31.892022239Z" level=info msg="Container to stop \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:51:31.894508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b-shm.mount: Deactivated successfully. Jan 20 00:51:31.902312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4-rootfs.mount: Deactivated successfully. Jan 20 00:51:31.907189 systemd[1]: cri-containerd-e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b.scope: Deactivated successfully. Jan 20 00:51:31.916442 containerd[1464]: time="2026-01-20T00:51:31.916363554Z" level=info msg="shim disconnected" id=4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4 namespace=k8s.io Jan 20 00:51:31.916442 containerd[1464]: time="2026-01-20T00:51:31.916446489Z" level=warning msg="cleaning up after shim disconnected" id=4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4 namespace=k8s.io Jan 20 00:51:31.916442 containerd[1464]: time="2026-01-20T00:51:31.916466637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:51:31.937226 containerd[1464]: time="2026-01-20T00:51:31.937048356Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:51:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:51:31.941461 containerd[1464]: time="2026-01-20T00:51:31.941437635Z" level=info msg="StopContainer for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" returns successfully" Jan 20 00:51:31.942359 containerd[1464]: time="2026-01-20T00:51:31.942327537Z" level=info msg="StopPodSandbox for \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\"" Jan 20 00:51:31.942513 containerd[1464]: time="2026-01-20T00:51:31.942472198Z" level=info msg="Container to stop \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:51:31.942557 containerd[1464]: time="2026-01-20T00:51:31.942518114Z" level=info msg="Container to stop \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:51:31.942557 containerd[1464]: time="2026-01-20T00:51:31.942534505Z" level=info msg="Container to stop \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:51:31.942557 containerd[1464]: 
time="2026-01-20T00:51:31.942548751Z" level=info msg="Container to stop \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:51:31.942628 containerd[1464]: time="2026-01-20T00:51:31.942562456Z" level=info msg="Container to stop \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:51:31.950833 systemd[1]: cri-containerd-73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c.scope: Deactivated successfully. Jan 20 00:51:31.965630 containerd[1464]: time="2026-01-20T00:51:31.965576302Z" level=info msg="shim disconnected" id=e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b namespace=k8s.io Jan 20 00:51:31.966075 containerd[1464]: time="2026-01-20T00:51:31.966032816Z" level=warning msg="cleaning up after shim disconnected" id=e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b namespace=k8s.io Jan 20 00:51:31.966178 containerd[1464]: time="2026-01-20T00:51:31.966076227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:51:31.984404 containerd[1464]: time="2026-01-20T00:51:31.984282282Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:51:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:51:31.995162 containerd[1464]: time="2026-01-20T00:51:31.995084089Z" level=info msg="TearDown network for sandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" successfully" Jan 20 00:51:31.995590 containerd[1464]: time="2026-01-20T00:51:31.995292259Z" level=info msg="StopPodSandbox for \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" returns successfully" Jan 20 00:51:31.995590 containerd[1464]: time="2026-01-20T00:51:31.995271027Z" level=info msg="shim disconnected" id=73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c namespace=k8s.io Jan 20 00:51:31.995590 containerd[1464]: time="2026-01-20T00:51:31.995441818Z" level=warning msg="cleaning up after shim disconnected" id=73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c namespace=k8s.io Jan 20 00:51:31.995590 containerd[1464]: time="2026-01-20T00:51:31.995451446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:51:32.017927 containerd[1464]: time="2026-01-20T00:51:32.017809171Z" level=info msg="TearDown network for sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" successfully" Jan 20 00:51:32.017927 containerd[1464]: time="2026-01-20T00:51:32.017920979Z" level=info msg="StopPodSandbox for \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" returns successfully" Jan 20 00:51:32.096988 kubelet[2533]: I0120 00:51:32.096888 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-net\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.096988 kubelet[2533]: I0120 00:51:32.096936 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-hubble-tls\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: 
\"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.096988 kubelet[2533]: I0120 00:51:32.096957 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdnm7\" (UniqueName: \"kubernetes.io/projected/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-kube-api-access-rdnm7\") pod \"6e73917d-1e1e-42c7-90ef-dffc705f7dbf\" (UID: \"6e73917d-1e1e-42c7-90ef-dffc705f7dbf\") " Jan 20 00:51:32.096988 kubelet[2533]: I0120 00:51:32.096970 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.096988 kubelet[2533]: I0120 00:51:32.096984 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq96n\" (UniqueName: \"kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-kube-api-access-jq96n\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.096988 kubelet[2533]: I0120 00:51:32.097000 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-xtables-lock\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097378 kubelet[2533]: I0120 00:51:32.097015 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-lib-modules\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097378 kubelet[2533]: I0120 00:51:32.097030 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-hostproc\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097378 kubelet[2533]: I0120 00:51:32.097054 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-cilium-config-path\") pod \"6e73917d-1e1e-42c7-90ef-dffc705f7dbf\" (UID: \"6e73917d-1e1e-42c7-90ef-dffc705f7dbf\") " Jan 20 00:51:32.097378 kubelet[2533]: I0120 00:51:32.097071 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cni-path\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097378 kubelet[2533]: I0120 00:51:32.097076 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-hostproc" (OuterVolumeSpecName: "hostproc") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.097378 kubelet[2533]: I0120 00:51:32.097089 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-kernel\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097508 kubelet[2533]: I0120 00:51:32.097176 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.097508 kubelet[2533]: I0120 00:51:32.097183 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e58220d-e169-4168-9f48-ed376e64edc6-clustermesh-secrets\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097508 kubelet[2533]: I0120 00:51:32.097212 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-cgroup\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097508 kubelet[2533]: I0120 00:51:32.097231 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-config-path\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097508 kubelet[2533]: I0120 00:51:32.097250 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-run\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097508 kubelet[2533]: I0120 00:51:32.097265 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-bpf-maps\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097633 kubelet[2533]: I0120 00:51:32.097320 2533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-etc-cni-netd\") pod \"2e58220d-e169-4168-9f48-ed376e64edc6\" (UID: \"2e58220d-e169-4168-9f48-ed376e64edc6\") " Jan 20 00:51:32.097633 kubelet[2533]: I0120 00:51:32.097359 2533 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.097633 kubelet[2533]: I0120 00:51:32.097370 2533 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.097633 kubelet[2533]: I0120 
00:51:32.097381 2533 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.097633 kubelet[2533]: I0120 00:51:32.097398 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.097967 kubelet[2533]: I0120 00:51:32.097759 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.102759 kubelet[2533]: I0120 00:51:32.102632 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.104945 kubelet[2533]: I0120 00:51:32.104839 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.105067 kubelet[2533]: I0120 00:51:32.105053 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.105211 kubelet[2533]: I0120 00:51:32.105197 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cni-path" (OuterVolumeSpecName: "cni-path") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.105409 kubelet[2533]: I0120 00:51:32.105337 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:51:32.106667 kubelet[2533]: I0120 00:51:32.106629 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-kube-api-access-jq96n" (OuterVolumeSpecName: "kube-api-access-jq96n") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "kube-api-access-jq96n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:51:32.108134 kubelet[2533]: I0120 00:51:32.108023 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:51:32.108356 kubelet[2533]: I0120 00:51:32.108325 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e73917d-1e1e-42c7-90ef-dffc705f7dbf" (UID: "6e73917d-1e1e-42c7-90ef-dffc705f7dbf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:51:32.108705 kubelet[2533]: I0120 00:51:32.108653 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e58220d-e169-4168-9f48-ed376e64edc6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:51:32.109143 kubelet[2533]: I0120 00:51:32.109053 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-kube-api-access-rdnm7" (OuterVolumeSpecName: "kube-api-access-rdnm7") pod "6e73917d-1e1e-42c7-90ef-dffc705f7dbf" (UID: "6e73917d-1e1e-42c7-90ef-dffc705f7dbf"). InnerVolumeSpecName "kube-api-access-rdnm7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:51:32.109452 kubelet[2533]: I0120 00:51:32.109425 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2e58220d-e169-4168-9f48-ed376e64edc6" (UID: "2e58220d-e169-4168-9f48-ed376e64edc6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197737 2533 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197778 2533 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197789 2533 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e58220d-e169-4168-9f48-ed376e64edc6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197797 2533 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197807 2533 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197815 2533 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197822 2533 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.197839 kubelet[2533]: I0120 00:51:32.197829 2533 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.198293 kubelet[2533]: I0120 00:51:32.197836 2533 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.198293 kubelet[2533]: I0120 00:51:32.197844 2533 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdnm7\" (UniqueName: \"kubernetes.io/projected/6e73917d-1e1e-42c7-90ef-dffc705f7dbf-kube-api-access-rdnm7\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.198293 kubelet[2533]: I0120 00:51:32.197851 2533 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq96n\" (UniqueName: \"kubernetes.io/projected/2e58220d-e169-4168-9f48-ed376e64edc6-kube-api-access-jq96n\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.198293 kubelet[2533]: I0120 00:51:32.197891 2533 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 00:51:32.198293 kubelet[2533]: I0120 00:51:32.197900 2533 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e58220d-e169-4168-9f48-ed376e64edc6-lib-modules\") on node 
\"localhost\" DevicePath \"\"" Jan 20 00:51:32.275223 systemd[1]: Removed slice kubepods-burstable-pod2e58220d_e169_4168_9f48_ed376e64edc6.slice - libcontainer container kubepods-burstable-pod2e58220d_e169_4168_9f48_ed376e64edc6.slice. Jan 20 00:51:32.275326 systemd[1]: kubepods-burstable-pod2e58220d_e169_4168_9f48_ed376e64edc6.slice: Consumed 11.042s CPU time. Jan 20 00:51:32.276519 systemd[1]: Removed slice kubepods-besteffort-pod6e73917d_1e1e_42c7_90ef_dffc705f7dbf.slice - libcontainer container kubepods-besteffort-pod6e73917d_1e1e_42c7_90ef_dffc705f7dbf.slice. Jan 20 00:51:32.798669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b-rootfs.mount: Deactivated successfully. Jan 20 00:51:32.798841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c-rootfs.mount: Deactivated successfully. Jan 20 00:51:32.798998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c-shm.mount: Deactivated successfully. Jan 20 00:51:32.799094 systemd[1]: var-lib-kubelet-pods-6e73917d\x2d1e1e\x2d42c7\x2d90ef\x2ddffc705f7dbf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drdnm7.mount: Deactivated successfully. Jan 20 00:51:32.799287 systemd[1]: var-lib-kubelet-pods-2e58220d\x2de169\x2d4168\x2d9f48\x2ded376e64edc6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djq96n.mount: Deactivated successfully. Jan 20 00:51:32.799392 systemd[1]: var-lib-kubelet-pods-2e58220d\x2de169\x2d4168\x2d9f48\x2ded376e64edc6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 00:51:32.799492 systemd[1]: var-lib-kubelet-pods-2e58220d\x2de169\x2d4168\x2d9f48\x2ded376e64edc6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 20 00:51:32.926425 kubelet[2533]: I0120 00:51:32.926396 2533 scope.go:117] "RemoveContainer" containerID="2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c" Jan 20 00:51:32.929584 containerd[1464]: time="2026-01-20T00:51:32.929535112Z" level=info msg="RemoveContainer for \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\"" Jan 20 00:51:32.937455 containerd[1464]: time="2026-01-20T00:51:32.937374649Z" level=info msg="RemoveContainer for \"2cee42f53ba8b8a6e880cef0707646b406ee22eb888f2cffbe4696d6b73cae2c\" returns successfully" Jan 20 00:51:32.937746 kubelet[2533]: I0120 00:51:32.937706 2533 scope.go:117] "RemoveContainer" containerID="4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4" Jan 20 00:51:32.939643 containerd[1464]: time="2026-01-20T00:51:32.939612768Z" level=info msg="RemoveContainer for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\"" Jan 20 00:51:32.943741 containerd[1464]: time="2026-01-20T00:51:32.943648575Z" level=info msg="RemoveContainer for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" returns successfully" Jan 20 00:51:32.944023 kubelet[2533]: I0120 00:51:32.943888 2533 scope.go:117] "RemoveContainer" containerID="181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df" Jan 20 00:51:32.945335 containerd[1464]: time="2026-01-20T00:51:32.944998545Z" level=info msg="RemoveContainer for \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\"" Jan 20 00:51:32.949335 containerd[1464]: time="2026-01-20T00:51:32.949257722Z" level=info msg="RemoveContainer for \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\" returns successfully" Jan 20 00:51:32.949585 kubelet[2533]: I0120 00:51:32.949454 2533 scope.go:117] "RemoveContainer" containerID="1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a" Jan 20 00:51:32.950574 containerd[1464]: time="2026-01-20T00:51:32.950465480Z" level=info msg="RemoveContainer for \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\"" Jan 20 00:51:32.954708 containerd[1464]: time="2026-01-20T00:51:32.954644378Z" level=info msg="RemoveContainer for \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\" returns successfully" Jan 20 00:51:32.955323 kubelet[2533]: I0120 00:51:32.955246 2533 scope.go:117] "RemoveContainer" containerID="ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7" Jan 20 00:51:32.956332 containerd[1464]: time="2026-01-20T00:51:32.956278912Z" level=info msg="RemoveContainer for \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\"" Jan 20 00:51:32.960034 containerd[1464]: time="2026-01-20T00:51:32.959955031Z" level=info msg="RemoveContainer for \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\" returns successfully" Jan 20 00:51:32.960290 kubelet[2533]: I0120 00:51:32.960207 2533 scope.go:117] "RemoveContainer" containerID="959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff" Jan 20 00:51:32.961281 containerd[1464]: time="2026-01-20T00:51:32.961186840Z" level=info msg="RemoveContainer for \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\"" Jan 20 00:51:32.964766 containerd[1464]: time="2026-01-20T00:51:32.964735329Z" level=info msg="RemoveContainer for \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\" returns successfully" Jan 20 00:51:32.964954 kubelet[2533]: I0120 00:51:32.964929 2533 scope.go:117] "RemoveContainer" 
containerID="4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4" Jan 20 00:51:32.968150 containerd[1464]: time="2026-01-20T00:51:32.968049625Z" level=error msg="ContainerStatus for \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\": not found" Jan 20 00:51:32.968383 kubelet[2533]: E0120 00:51:32.968300 2533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\": not found" containerID="4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4" Jan 20 00:51:32.968383 kubelet[2533]: I0120 00:51:32.968338 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4"} err="failed to get container status \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f90b63972f0df29bc8c0754bd228a644b53f4ca137c8241fc0a483339a9e2f4\": not found" Jan 20 00:51:32.968383 kubelet[2533]: I0120 00:51:32.968379 2533 scope.go:117] "RemoveContainer" containerID="181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df" Jan 20 00:51:32.968774 containerd[1464]: time="2026-01-20T00:51:32.968641526Z" level=error msg="ContainerStatus for \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\": not found" Jan 20 00:51:32.968836 kubelet[2533]: E0120 00:51:32.968819 2533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\": not found" containerID="181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df" Jan 20 00:51:32.968902 kubelet[2533]: I0120 00:51:32.968839 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df"} err="failed to get container status \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\": rpc error: code = NotFound desc = an error occurred when try to find container \"181556bd676c655f938900941bfca887356f388405763b5b5691ffa33ffa27df\": not found" Jan 20 00:51:32.968902 kubelet[2533]: I0120 00:51:32.968852 2533 scope.go:117] "RemoveContainer" containerID="1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a" Jan 20 00:51:32.969177 containerd[1464]: time="2026-01-20T00:51:32.969055986Z" level=error msg="ContainerStatus for \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\": not found" Jan 20 00:51:32.969345 kubelet[2533]: E0120 00:51:32.969301 2533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\": not found" 
containerID="1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a" Jan 20 00:51:32.969345 kubelet[2533]: I0120 00:51:32.969337 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a"} err="failed to get container status \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1830bc4f3a12be92cfe568a8c5cf04afb40f0c40bd96b6e84377bd9541457a8a\": not found" Jan 20 00:51:32.969408 kubelet[2533]: I0120 00:51:32.969351 2533 scope.go:117] "RemoveContainer" containerID="ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7" Jan 20 00:51:32.969554 containerd[1464]: time="2026-01-20T00:51:32.969506820Z" level=error msg="ContainerStatus for \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\": not found" Jan 20 00:51:32.969686 kubelet[2533]: E0120 00:51:32.969635 2533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\": not found" containerID="ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7" Jan 20 00:51:32.969686 kubelet[2533]: I0120 00:51:32.969679 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7"} err="failed to get container status \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee77d6b80b0c8bbf232f2997798fe608b1d18fa199fbcd7d9f05ab799052bee7\": not found" Jan 20 00:51:32.969741 kubelet[2533]: I0120 00:51:32.969693 2533 scope.go:117] "RemoveContainer" containerID="959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff" Jan 20 00:51:32.969936 containerd[1464]: time="2026-01-20T00:51:32.969850361Z" level=error msg="ContainerStatus for \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\": not found" Jan 20 00:51:32.970049 kubelet[2533]: E0120 00:51:32.969998 2533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\": not found" containerID="959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff" Jan 20 00:51:32.970049 kubelet[2533]: I0120 00:51:32.970034 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff"} err="failed to get container status \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"959a27ff549212f987a0d1b2bb7f57cd5678ac800452373ede99feca950d99ff\": not found" Jan 20 00:51:33.268056 kubelet[2533]: E0120 00:51:33.267988 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:33.718561 sshd[4237]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:33.726421 systemd[1]: sshd@26-10.0.0.135:22-10.0.0.1:49518.service: Deactivated successfully. Jan 20 00:51:33.728548 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 00:51:33.730728 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit. Jan 20 00:51:33.740580 systemd[1]: Started sshd@27-10.0.0.135:22-10.0.0.1:36486.service - OpenSSH per-connection server daemon (10.0.0.1:36486). Jan 20 00:51:33.742329 systemd-logind[1447]: Removed session 27. Jan 20 00:51:33.778411 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 36486 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:51:33.780188 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:33.787802 systemd-logind[1447]: New session 28 of user core. Jan 20 00:51:33.802372 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 00:51:34.251900 containerd[1464]: time="2026-01-20T00:51:34.251824837Z" level=info msg="StopPodSandbox for \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\"" Jan 20 00:51:34.252349 containerd[1464]: time="2026-01-20T00:51:34.251955092Z" level=info msg="TearDown network for sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" successfully" Jan 20 00:51:34.252349 containerd[1464]: time="2026-01-20T00:51:34.251966422Z" level=info msg="StopPodSandbox for \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" returns successfully" Jan 20 00:51:34.252530 containerd[1464]: time="2026-01-20T00:51:34.252501733Z" level=info msg="RemovePodSandbox for \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\"" Jan 20 00:51:34.252577 containerd[1464]: time="2026-01-20T00:51:34.252549542Z" level=info msg="Forcibly stopping sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\"" Jan 20 00:51:34.252634 containerd[1464]: time="2026-01-20T00:51:34.252609665Z" level=info msg="TearDown network for sandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" successfully" Jan 20 00:51:34.256636 containerd[1464]: time="2026-01-20T00:51:34.256575387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 20 00:51:34.256636 containerd[1464]: time="2026-01-20T00:51:34.256608819Z" level=info msg="RemovePodSandbox \"73ba1f9e7b08d5e950cc642696b8f9e675cba5948cf729bd191c44630f56172c\" returns successfully" Jan 20 00:51:34.256981 containerd[1464]: time="2026-01-20T00:51:34.256935913Z" level=info msg="StopPodSandbox for \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\"" Jan 20 00:51:34.257193 containerd[1464]: time="2026-01-20T00:51:34.257165873Z" level=info msg="TearDown network for sandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" successfully" Jan 20 00:51:34.257222 containerd[1464]: time="2026-01-20T00:51:34.257192793Z" level=info msg="StopPodSandbox for \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" returns successfully" Jan 20 00:51:34.257471 containerd[1464]: time="2026-01-20T00:51:34.257438052Z" level=info msg="RemovePodSandbox for \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\"" Jan 20 00:51:34.257534 containerd[1464]: time="2026-01-20T00:51:34.257457118Z" level=info msg="Forcibly stopping sandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\"" Jan 20 00:51:34.257581 containerd[1464]: time="2026-01-20T00:51:34.257554750Z" level=info msg="TearDown network for sandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" successfully" Jan 20 00:51:34.261417 containerd[1464]: time="2026-01-20T00:51:34.261379060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:51:34.261417 containerd[1464]: time="2026-01-20T00:51:34.261409196Z" level=info msg="RemovePodSandbox \"e79d73380d8b41d4468fe88f62fa43916c080a65cce803fd91e6a20dbbd0648b\" returns successfully" Jan 20 00:51:34.268998 kubelet[2533]: I0120 00:51:34.268918 2533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e58220d-e169-4168-9f48-ed376e64edc6" path="/var/lib/kubelet/pods/2e58220d-e169-4168-9f48-ed376e64edc6/volumes" Jan 20 00:51:34.269817 kubelet[2533]: I0120 00:51:34.269766 2533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e73917d-1e1e-42c7-90ef-dffc705f7dbf" path="/var/lib/kubelet/pods/6e73917d-1e1e-42c7-90ef-dffc705f7dbf/volumes" Jan 20 00:51:34.357941 kubelet[2533]: E0120 00:51:34.357812 2533 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 00:51:34.406429 sshd[4396]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:34.420244 systemd[1]: sshd@27-10.0.0.135:22-10.0.0.1:36486.service: Deactivated successfully. Jan 20 00:51:34.425461 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 00:51:34.429369 systemd-logind[1447]: Session 28 logged out. Waiting for processes to exit. Jan 20 00:51:34.449435 systemd[1]: Started sshd@28-10.0.0.135:22-10.0.0.1:36502.service - OpenSSH per-connection server daemon (10.0.0.1:36502). Jan 20 00:51:34.454858 systemd-logind[1447]: Removed session 28. Jan 20 00:51:34.463929 systemd[1]: Created slice kubepods-burstable-pod1b2b0bd3_ea0c_49b2_8e76_5afa8cc8315f.slice - libcontainer container kubepods-burstable-pod1b2b0bd3_ea0c_49b2_8e76_5afa8cc8315f.slice. 
Jan 20 00:51:34.497437 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 36502 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:51:34.499091 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:51:34.504499 systemd-logind[1447]: New session 29 of user core.
Jan 20 00:51:34.514264 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 20 00:51:34.566546 sshd[4415]: pam_unix(sshd:session): session closed for user core
Jan 20 00:51:34.581418 systemd[1]: sshd@28-10.0.0.135:22-10.0.0.1:36502.service: Deactivated successfully.
Jan 20 00:51:34.583994 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 00:51:34.586356 systemd-logind[1447]: Session 29 logged out. Waiting for processes to exit.
Jan 20 00:51:34.596419 systemd[1]: Started sshd@29-10.0.0.135:22-10.0.0.1:36518.service - OpenSSH per-connection server daemon (10.0.0.1:36518).
Jan 20 00:51:34.597906 systemd-logind[1447]: Removed session 29.
Jan 20 00:51:34.613806 kubelet[2533]: I0120 00:51:34.613701 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-host-proc-sys-net\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.613806 kubelet[2533]: I0120 00:51:34.613775 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-cilium-config-path\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.613929 kubelet[2533]: I0120 00:51:34.613809 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-hostproc\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.613929 kubelet[2533]: I0120 00:51:34.613831 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-cilium-cgroup\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.613929 kubelet[2533]: I0120 00:51:34.613851 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-cni-path\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614019 kubelet[2533]: I0120 00:51:34.613954 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dfjd\" (UniqueName: \"kubernetes.io/projected/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-kube-api-access-8dfjd\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614019 kubelet[2533]: I0120 00:51:34.613985 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-cilium-run\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614019 kubelet[2533]: I0120 00:51:34.614003 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-xtables-lock\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614019 kubelet[2533]: I0120 00:51:34.614018 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-bpf-maps\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614154 kubelet[2533]: I0120 00:51:34.614031 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-hubble-tls\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614154 kubelet[2533]: I0120 00:51:34.614043 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-clustermesh-secrets\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614154 kubelet[2533]: I0120 00:51:34.614055 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-host-proc-sys-kernel\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614154 kubelet[2533]: I0120 00:51:34.614068 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-etc-cni-netd\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614237 kubelet[2533]: I0120 00:51:34.614094 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-cilium-ipsec-secrets\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.614360 kubelet[2533]: I0120 00:51:34.614318 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f-lib-modules\") pod \"cilium-vc8x5\" (UID: \"1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f\") " pod="kube-system/cilium-vc8x5"
Jan 20 00:51:34.629234 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 36518 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:51:34.630644 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:51:34.635554 systemd-logind[1447]: New session 30 of user core.
Jan 20 00:51:34.644378 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 20 00:51:34.771280 kubelet[2533]: E0120 00:51:34.771029 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:34.771860 containerd[1464]: time="2026-01-20T00:51:34.771771319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vc8x5,Uid:1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f,Namespace:kube-system,Attempt:0,}"
Jan 20 00:51:34.798663 containerd[1464]: time="2026-01-20T00:51:34.797191133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:51:34.799088 containerd[1464]: time="2026-01-20T00:51:34.798628051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:51:34.799088 containerd[1464]: time="2026-01-20T00:51:34.798823126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:51:34.799088 containerd[1464]: time="2026-01-20T00:51:34.799015005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:51:34.823279 systemd[1]: Started cri-containerd-5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690.scope - libcontainer container 5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690.
Jan 20 00:51:34.853309 containerd[1464]: time="2026-01-20T00:51:34.853253423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vc8x5,Uid:1b2b0bd3-ea0c-49b2-8e76-5afa8cc8315f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\""
Jan 20 00:51:34.854049 kubelet[2533]: E0120 00:51:34.853935 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:34.861142 containerd[1464]: time="2026-01-20T00:51:34.861011665Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 00:51:34.873594 containerd[1464]: time="2026-01-20T00:51:34.873553267Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c\""
Jan 20 00:51:34.874307 containerd[1464]: time="2026-01-20T00:51:34.874261116Z" level=info msg="StartContainer for \"66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c\""
Jan 20 00:51:34.912292 systemd[1]: Started cri-containerd-66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c.scope - libcontainer container 66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c.
Jan 20 00:51:34.953087 containerd[1464]: time="2026-01-20T00:51:34.952976084Z" level=info msg="StartContainer for \"66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c\" returns successfully"
Jan 20 00:51:34.965167 systemd[1]: cri-containerd-66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c.scope: Deactivated successfully.
Jan 20 00:51:35.005540 containerd[1464]: time="2026-01-20T00:51:35.005482601Z" level=info msg="shim disconnected" id=66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c namespace=k8s.io
Jan 20 00:51:35.005540 containerd[1464]: time="2026-01-20T00:51:35.005536673Z" level=warning msg="cleaning up after shim disconnected" id=66c872df25e3efad2f6c2730837cfb3c2af57a36fa7742db7fac03eb0737088c namespace=k8s.io
Jan 20 00:51:35.005540 containerd[1464]: time="2026-01-20T00:51:35.005548845Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:51:35.948410 kubelet[2533]: E0120 00:51:35.948315 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:35.956445 containerd[1464]: time="2026-01-20T00:51:35.956328493Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 00:51:35.971731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378857185.mount: Deactivated successfully.
Jan 20 00:51:35.973307 containerd[1464]: time="2026-01-20T00:51:35.973261209Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31\""
Jan 20 00:51:35.974227 containerd[1464]: time="2026-01-20T00:51:35.974181213Z" level=info msg="StartContainer for \"4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31\""
Jan 20 00:51:36.019317 systemd[1]: Started cri-containerd-4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31.scope - libcontainer container 4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31.
Jan 20 00:51:36.053361 containerd[1464]: time="2026-01-20T00:51:36.053265073Z" level=info msg="StartContainer for \"4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31\" returns successfully"
Jan 20 00:51:36.061354 systemd[1]: cri-containerd-4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31.scope: Deactivated successfully.
Jan 20 00:51:36.097930 containerd[1464]: time="2026-01-20T00:51:36.097799573Z" level=info msg="shim disconnected" id=4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31 namespace=k8s.io
Jan 20 00:51:36.097930 containerd[1464]: time="2026-01-20T00:51:36.097865937Z" level=warning msg="cleaning up after shim disconnected" id=4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31 namespace=k8s.io
Jan 20 00:51:36.097930 containerd[1464]: time="2026-01-20T00:51:36.097915951Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:51:36.728161 systemd[1]: run-containerd-runc-k8s.io-4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31-runc.QyyJFw.mount: Deactivated successfully.
Jan 20 00:51:36.728289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4332a049954edd78bcfb34096dfbe3aa6b6cc2b80bfd2e11fbc1ca0978a26f31-rootfs.mount: Deactivated successfully.
Jan 20 00:51:36.953714 kubelet[2533]: E0120 00:51:36.953524 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:36.959950 kubelet[2533]: I0120 00:51:36.959860 2533 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T00:51:36Z","lastTransitionTime":"2026-01-20T00:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 00:51:36.962671 containerd[1464]: time="2026-01-20T00:51:36.962594382Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 00:51:36.988469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157268733.mount: Deactivated successfully.
Jan 20 00:51:36.991265 containerd[1464]: time="2026-01-20T00:51:36.991041957Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f\""
Jan 20 00:51:36.992644 containerd[1464]: time="2026-01-20T00:51:36.992612274Z" level=info msg="StartContainer for \"cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f\""
Jan 20 00:51:37.036355 systemd[1]: Started cri-containerd-cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f.scope - libcontainer container cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f.
Jan 20 00:51:37.068309 containerd[1464]: time="2026-01-20T00:51:37.068217594Z" level=info msg="StartContainer for \"cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f\" returns successfully"
Jan 20 00:51:37.069793 systemd[1]: cri-containerd-cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f.scope: Deactivated successfully.
Jan 20 00:51:37.103471 containerd[1464]: time="2026-01-20T00:51:37.103389923Z" level=info msg="shim disconnected" id=cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f namespace=k8s.io
Jan 20 00:51:37.103471 containerd[1464]: time="2026-01-20T00:51:37.103446298Z" level=warning msg="cleaning up after shim disconnected" id=cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f namespace=k8s.io
Jan 20 00:51:37.103471 containerd[1464]: time="2026-01-20T00:51:37.103454944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:51:37.728618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cab574d8cf7efe6f0f357a88a78c418679f8508189429e207799d3b84bd1186f-rootfs.mount: Deactivated successfully.
Jan 20 00:51:37.958378 kubelet[2533]: E0120 00:51:37.958325 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:37.963794 containerd[1464]: time="2026-01-20T00:51:37.963725154Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 00:51:37.981410 containerd[1464]: time="2026-01-20T00:51:37.981306797Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6\""
Jan 20 00:51:37.982464 containerd[1464]: time="2026-01-20T00:51:37.982408175Z" level=info msg="StartContainer for \"2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6\""
Jan 20 00:51:38.029331 systemd[1]: Started cri-containerd-2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6.scope - libcontainer container 2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6.
Jan 20 00:51:38.062340 systemd[1]: cri-containerd-2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6.scope: Deactivated successfully.
Jan 20 00:51:38.064126 containerd[1464]: time="2026-01-20T00:51:38.064035692Z" level=info msg="StartContainer for \"2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6\" returns successfully"
Jan 20 00:51:38.094251 containerd[1464]: time="2026-01-20T00:51:38.093778928Z" level=info msg="shim disconnected" id=2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6 namespace=k8s.io
Jan 20 00:51:38.094251 containerd[1464]: time="2026-01-20T00:51:38.093851413Z" level=warning msg="cleaning up after shim disconnected" id=2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6 namespace=k8s.io
Jan 20 00:51:38.094251 containerd[1464]: time="2026-01-20T00:51:38.093866552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:51:38.111539 containerd[1464]: time="2026-01-20T00:51:38.111462950Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:51:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 20 00:51:38.729061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ebde00059bd20181c260021182e248cd7a47aa2feb28f494aeaef4885bb84a6-rootfs.mount: Deactivated successfully.
Jan 20 00:51:38.963936 kubelet[2533]: E0120 00:51:38.963817 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:38.973741 containerd[1464]: time="2026-01-20T00:51:38.973662693Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 00:51:38.995002 containerd[1464]: time="2026-01-20T00:51:38.994826345Z" level=info msg="CreateContainer within sandbox \"5b62541d85c9c2218e6ed88986e85505e7861e5d2d6026b821a7d63b2b48a690\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76f1222ec76cddf0978dd7084ef8835dd88a6552124f0ca479f0e10f46268b6b\""
Jan 20 00:51:38.995733 containerd[1464]: time="2026-01-20T00:51:38.995641676Z" level=info msg="StartContainer for \"76f1222ec76cddf0978dd7084ef8835dd88a6552124f0ca479f0e10f46268b6b\""
Jan 20 00:51:39.031297 systemd[1]: Started cri-containerd-76f1222ec76cddf0978dd7084ef8835dd88a6552124f0ca479f0e10f46268b6b.scope - libcontainer container 76f1222ec76cddf0978dd7084ef8835dd88a6552124f0ca479f0e10f46268b6b.
Jan 20 00:51:39.072481 containerd[1464]: time="2026-01-20T00:51:39.072346039Z" level=info msg="StartContainer for \"76f1222ec76cddf0978dd7084ef8835dd88a6552124f0ca479f0e10f46268b6b\" returns successfully"
Jan 20 00:51:39.518196 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 20 00:51:39.970197 kubelet[2533]: E0120 00:51:39.969941 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:39.989649 kubelet[2533]: I0120 00:51:39.989582 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vc8x5" podStartSLOduration=5.989565848 podStartE2EDuration="5.989565848s" podCreationTimestamp="2026-01-20 00:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:51:39.987956528 +0000 UTC m=+125.857744371" watchObservedRunningTime="2026-01-20 00:51:39.989565848 +0000 UTC m=+125.859353680"
Jan 20 00:51:40.972853 kubelet[2533]: E0120 00:51:40.972707 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:41.974841 kubelet[2533]: E0120 00:51:41.974802 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:42.967039 systemd-networkd[1392]: lxc_health: Link UP
Jan 20 00:51:42.973439 systemd-networkd[1392]: lxc_health: Gained carrier
Jan 20 00:51:43.152879 systemd[1]: run-containerd-runc-k8s.io-76f1222ec76cddf0978dd7084ef8835dd88a6552124f0ca479f0e10f46268b6b-runc.ecLfdN.mount: Deactivated successfully.
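[Editor's note] A quick way to read the pod_startup_latency_tracker entry above: both pull timestamps are the zero value (0001-01-01), so no image pull is counted, and the reported podStartSLOduration of 5.989565848s is exactly the gap between podCreationTimestamp (00:51:34) and watchObservedRunningTime (00:51:39.989565848). A small Python check with the timestamps copied from that entry, truncated from nanoseconds to the microseconds datetime supports, is sketched below.

from datetime import datetime, timezone

# Timestamps taken from the kubelet entry above (microsecond precision only).
created = datetime(2026, 1, 20, 0, 51, 34, 0, tzinfo=timezone.utc)
running = datetime(2026, 1, 20, 0, 51, 39, 989565, tzinfo=timezone.utc)

# Prints 5.989565, consistent with the logged podStartSLOduration=5.989565848.
print((running - created).total_seconds())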
Jan 20 00:51:44.648385 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Jan 20 00:51:44.773730 kubelet[2533]: E0120 00:51:44.773453 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:44.980851 kubelet[2533]: E0120 00:51:44.980643 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:45.983339 kubelet[2533]: E0120 00:51:45.983264 2533 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:51:49.657266 sshd[4423]: pam_unix(sshd:session): session closed for user core
Jan 20 00:51:49.662726 systemd[1]: sshd@29-10.0.0.135:22-10.0.0.1:36518.service: Deactivated successfully.
Jan 20 00:51:49.665893 systemd[1]: session-30.scope: Deactivated successfully.
Jan 20 00:51:49.667296 systemd-logind[1447]: Session 30 logged out. Waiting for processes to exit.
Jan 20 00:51:49.669087 systemd-logind[1447]: Removed session 30.