Jan 28 01:18:49.703214 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026 Jan 28 01:18:49.703251 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 01:18:49.703267 kernel: BIOS-provided physical RAM map: Jan 28 01:18:49.703275 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 01:18:49.703283 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 01:18:49.703292 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 01:18:49.703303 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 28 01:18:49.703311 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 28 01:18:49.703320 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 01:18:49.703331 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 01:18:49.703339 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 01:18:49.703349 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 01:18:49.703358 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 28 01:18:49.703367 kernel: NX (Execute Disable) protection: active Jan 28 01:18:49.703377 kernel: APIC: Static calls initialized Jan 28 01:18:49.703390 kernel: SMBIOS 2.8 present. 
Jan 28 01:18:49.703399 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 28 01:18:49.703408 kernel: Hypervisor detected: KVM Jan 28 01:18:49.703419 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 01:18:49.703429 kernel: kvm-clock: using sched offset of 6577204553 cycles Jan 28 01:18:49.703439 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 01:18:49.703448 kernel: tsc: Detected 2445.424 MHz processor Jan 28 01:18:49.703458 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 01:18:49.703467 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 01:18:49.703480 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 28 01:18:49.703489 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 01:18:49.703500 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 01:18:49.703509 kernel: Using GB pages for direct mapping Jan 28 01:18:49.703557 kernel: ACPI: Early table checksum verification disabled Jan 28 01:18:49.703566 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 28 01:18:49.703575 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703584 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703593 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703607 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 28 01:18:49.703617 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703627 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703638 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703648 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:18:49.703658 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 28 01:18:49.703670 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 28 01:18:49.703686 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 28 01:18:49.703698 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 28 01:18:49.703708 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 28 01:18:49.703718 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 28 01:18:49.703728 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 28 01:18:49.703738 kernel: No NUMA configuration found Jan 28 01:18:49.703748 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 28 01:18:49.703761 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 28 01:18:49.703771 kernel: Zone ranges: Jan 28 01:18:49.703780 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 01:18:49.703790 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 28 01:18:49.703800 kernel: Normal empty Jan 28 01:18:49.703809 kernel: Movable zone start for each node Jan 28 01:18:49.703859 kernel: Early memory node ranges Jan 28 01:18:49.703871 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 28 01:18:49.703882 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 28 01:18:49.703892 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 28 01:18:49.703906 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 01:18:49.703915 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 01:18:49.703924 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 28 01:18:49.703935 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 01:18:49.703945 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 01:18:49.703954 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 01:18:49.703964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 01:18:49.703974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 01:18:49.703984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 01:18:49.703997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 01:18:49.704009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 01:18:49.704018 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 01:18:49.704028 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 28 01:18:49.704037 kernel: TSC deadline timer available Jan 28 01:18:49.704047 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 28 01:18:49.704056 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 01:18:49.704066 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 28 01:18:49.704077 kernel: kvm-guest: setup PV sched yield Jan 28 01:18:49.704090 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 01:18:49.704099 kernel: Booting paravirtualized kernel on KVM Jan 28 01:18:49.704109 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 01:18:49.704119 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 28 01:18:49.704129 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 28 01:18:49.704138 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 28 01:18:49.704149 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 28 01:18:49.704158 kernel: kvm-guest: PV spinlocks enabled Jan 28 01:18:49.704168 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 01:18:49.704183 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 01:18:49.704193 kernel: random: crng init done Jan 28 01:18:49.704205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 01:18:49.704215 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 01:18:49.704225 kernel: Fallback order for Node 0: 0 Jan 28 01:18:49.704234 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 28 01:18:49.704243 kernel: Policy zone: DMA32 Jan 28 01:18:49.704252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 01:18:49.704270 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved) Jan 28 01:18:49.704280 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 28 01:18:49.704289 kernel: ftrace: allocating 37989 entries in 149 pages Jan 28 01:18:49.704299 kernel: ftrace: allocated 149 pages with 4 groups Jan 28 01:18:49.704308 kernel: Dynamic Preempt: voluntary Jan 28 01:18:49.704318 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 01:18:49.704329 kernel: rcu: RCU event tracing is enabled. Jan 28 01:18:49.704339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 28 01:18:49.704349 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 01:18:49.704363 kernel: Rude variant of Tasks RCU enabled. Jan 28 01:18:49.704372 kernel: Tracing variant of Tasks RCU enabled. Jan 28 01:18:49.704382 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 01:18:49.704392 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 28 01:18:49.704401 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 28 01:18:49.704411 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 01:18:49.704421 kernel: Console: colour VGA+ 80x25 Jan 28 01:18:49.704431 kernel: printk: console [ttyS0] enabled Jan 28 01:18:49.704441 kernel: ACPI: Core revision 20230628 Jan 28 01:18:49.704454 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 28 01:18:49.704464 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 01:18:49.704477 kernel: x2apic enabled Jan 28 01:18:49.704486 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 01:18:49.704496 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 28 01:18:49.704505 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 28 01:18:49.708372 kernel: kvm-guest: setup PV IPIs Jan 28 01:18:49.708393 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 28 01:18:49.708430 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 28 01:18:49.708443 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 28 01:18:49.708455 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 01:18:49.708467 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 28 01:18:49.708481 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 28 01:18:49.708492 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 01:18:49.708502 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 01:18:49.708549 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 01:18:49.708565 kernel: Speculative Store Bypass: Vulnerable Jan 28 01:18:49.708584 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 28 01:18:49.708598 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 28 01:18:49.708611 kernel: active return thunk: srso_alias_return_thunk Jan 28 01:18:49.708622 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 28 01:18:49.708632 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 28 01:18:49.708644 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 28 01:18:49.708657 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 01:18:49.708669 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 01:18:49.708684 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 01:18:49.708694 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 01:18:49.708703 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 28 01:18:49.708713 kernel: Freeing SMP alternatives memory: 32K Jan 28 01:18:49.708725 kernel: pid_max: default: 32768 minimum: 301 Jan 28 01:18:49.708736 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 28 01:18:49.708748 kernel: landlock: Up and running. Jan 28 01:18:49.708761 kernel: SELinux: Initializing. Jan 28 01:18:49.708772 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:18:49.708786 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:18:49.708796 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 28 01:18:49.708806 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 01:18:49.708866 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 01:18:49.708880 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 01:18:49.708892 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 28 01:18:49.708901 kernel: signal: max sigframe size: 1776 Jan 28 01:18:49.708911 kernel: rcu: Hierarchical SRCU implementation. Jan 28 01:18:49.708922 kernel: rcu: Max phase no-delay instances is 400. Jan 28 01:18:49.708938 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 01:18:49.708950 kernel: smp: Bringing up secondary CPUs ... Jan 28 01:18:49.708961 kernel: smpboot: x86: Booting SMP configuration: Jan 28 01:18:49.708974 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 28 01:18:49.708987 kernel: smp: Brought up 1 node, 4 CPUs Jan 28 01:18:49.708999 kernel: smpboot: Max logical packages: 1 Jan 28 01:18:49.709011 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 28 01:18:49.709024 kernel: devtmpfs: initialized Jan 28 01:18:49.709037 kernel: x86/mm: Memory block size: 128MB Jan 28 01:18:49.709057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 01:18:49.709071 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 28 01:18:49.709085 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 01:18:49.709099 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 01:18:49.709113 kernel: audit: initializing netlink subsys (disabled) Jan 28 01:18:49.709126 kernel: audit: type=2000 audit(1769563123.789:1): state=initialized audit_enabled=0 res=1 Jan 28 01:18:49.709140 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 01:18:49.709154 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 01:18:49.709167 kernel: cpuidle: using governor menu Jan 28 01:18:49.709185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 01:18:49.709198 kernel: dca service started, version 1.12.1 Jan 28 01:18:49.709211 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 28 01:18:49.709225 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 01:18:49.709238 kernel: PCI: Using configuration type 1 for base access Jan 28 01:18:49.709250 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 28 01:18:49.709262 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 01:18:49.709275 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 01:18:49.709287 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 01:18:49.709305 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 01:18:49.709317 kernel: ACPI: Added _OSI(Module Device) Jan 28 01:18:49.709331 kernel: ACPI: Added _OSI(Processor Device) Jan 28 01:18:49.709345 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 01:18:49.709357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 01:18:49.709369 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 28 01:18:49.709379 kernel: ACPI: Interpreter enabled Jan 28 01:18:49.709389 kernel: ACPI: PM: (supports S0 S3 S5) Jan 28 01:18:49.709398 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 01:18:49.709413 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 01:18:49.709426 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 01:18:49.709439 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 01:18:49.709451 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 01:18:49.709916 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 01:18:49.710171 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 28 01:18:49.710360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 28 01:18:49.710382 kernel: PCI host bridge to bus 0000:00 Jan 28 01:18:49.710616 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 01:18:49.710800 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 28 01:18:49.711172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 01:18:49.719319 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 28 01:18:49.719585 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 01:18:49.719806 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 28 01:18:49.720102 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 01:18:49.720355 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 28 01:18:49.720616 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 28 01:18:49.720882 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 28 01:18:49.721113 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 28 01:18:49.721337 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 28 01:18:49.721591 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 01:18:49.721891 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 28 01:18:49.722100 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 28 01:18:49.722300 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 28 01:18:49.727654 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 28 01:18:49.727975 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 28 01:18:49.728203 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 28 01:18:49.728444 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 28 01:18:49.728702 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 28 01:18:49.729008 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 28 01:18:49.729235 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 28 01:18:49.729454 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 28 01:18:49.729707 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 28 01:18:49.729981 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 28 01:18:49.730225 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 28 01:18:49.730445 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 01:18:49.733855 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 28 01:18:49.734091 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 28 01:18:49.734307 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 28 01:18:49.734565 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 28 01:18:49.734756 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 28 01:18:49.734783 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 01:18:49.734797 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 01:18:49.734807 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 01:18:49.734865 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 01:18:49.734876 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 01:18:49.734888 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 01:18:49.734902 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 01:18:49.734912 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 01:18:49.734927 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
28 01:18:49.734938 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 01:18:49.734949 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 01:18:49.734963 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 01:18:49.734973 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 28 01:18:49.734983 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 01:18:49.734993 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 01:18:49.735004 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 01:18:49.735016 kernel: iommu: Default domain type: Translated Jan 28 01:18:49.735034 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 01:18:49.735046 kernel: PCI: Using ACPI for IRQ routing Jan 28 01:18:49.735058 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 01:18:49.735072 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 01:18:49.735085 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 28 01:18:49.735308 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 01:18:49.735564 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 01:18:49.735789 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 01:18:49.735809 kernel: vgaarb: loaded Jan 28 01:18:49.735882 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 28 01:18:49.735896 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 28 01:18:49.735908 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 01:18:49.735922 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 01:18:49.735934 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 01:18:49.735947 kernel: pnp: PnP ACPI init Jan 28 01:18:49.737737 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 01:18:49.737758 kernel: pnp: PnP ACPI: found 6 devices Jan 28 01:18:49.737776 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 01:18:49.737787 kernel: NET: Registered PF_INET protocol family Jan 28 01:18:49.737798 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 01:18:49.737809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 01:18:49.737866 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 01:18:49.737877 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 01:18:49.737888 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 01:18:49.737898 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 01:18:49.737909 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:18:49.738027 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:18:49.738058 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 01:18:49.738069 kernel: NET: Registered PF_XDP protocol family Jan 28 01:18:49.738242 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 01:18:49.738711 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 01:18:49.738925 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 01:18:49.739081 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 28 01:18:49.739238 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 28 01:18:49.739396 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 28 01:18:49.739544 kernel: PCI: CLS 0 bytes, default 64 Jan 28 01:18:49.739556 kernel: Initialise system trusted keyrings Jan 28 01:18:49.739567 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 01:18:49.739578 kernel: Key type asymmetric registered Jan 28 01:18:49.739588 kernel: Asymmetric key parser 'x509' registered Jan 28 01:18:49.739599 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 28 01:18:49.739609 kernel: io scheduler mq-deadline registered Jan 28 01:18:49.739620 kernel: io scheduler kyber registered Jan 28 01:18:49.739635 kernel: io scheduler bfq registered Jan 28 01:18:49.739645 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 01:18:49.739657 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 01:18:49.739668 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 01:18:49.739679 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 01:18:49.739689 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 01:18:49.739700 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 01:18:49.739711 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 01:18:49.739722 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 01:18:49.739735 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 01:18:49.741134 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 28 01:18:49.741152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 01:18:49.741311 kernel: rtc_cmos 00:04: registered as rtc0 Jan 28 01:18:49.741324 kernel: hpet: Lost 1 RTC interrupts Jan 28 01:18:49.742666 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:18:47 UTC (1769563127) Jan 28 01:18:49.742884 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 28 01:18:49.742902 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 28 01:18:49.742918 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:18:49.742929 kernel: Segment Routing with IPv6 Jan 28 01:18:49.742940 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:18:49.742951 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:18:49.742961 kernel: Key type dns_resolver registered Jan 28 01:18:49.742972 kernel: IPI shorthand broadcast: enabled Jan 28 01:18:49.742982 kernel: sched_clock: Marking stable (3161020224, 876836871)->(5233861695, -1196004600) Jan 28 01:18:49.742993 kernel: registered taskstats version 1 Jan 28 01:18:49.743004 kernel: Loading compiled-in X.509 certificates Jan 28 01:18:49.743018 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d' Jan 28 01:18:49.743049 kernel: Key type .fscrypt registered Jan 28 01:18:49.743059 kernel: Key type fscrypt-provisioning registered Jan 28 01:18:49.743070 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 28 01:18:49.743080 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:18:49.743091 kernel: ima: No architecture policies found Jan 28 01:18:49.743102 kernel: clk: Disabling unused clocks Jan 28 01:18:49.743112 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 28 01:18:49.743123 kernel: Write protecting the kernel read-only data: 36864k Jan 28 01:18:49.743137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 28 01:18:49.743148 kernel: Run /init as init process Jan 28 01:18:49.743158 kernel: with arguments: Jan 28 01:18:49.743168 kernel: /init Jan 28 01:18:49.743179 kernel: with environment: Jan 28 01:18:49.743189 kernel: HOME=/ Jan 28 01:18:49.743199 kernel: TERM=linux Jan 28 01:18:49.743213 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:18:49.743229 systemd[1]: Detected virtualization kvm. Jan 28 01:18:49.743241 systemd[1]: Detected architecture x86-64. Jan 28 01:18:49.743252 systemd[1]: Running in initrd. Jan 28 01:18:49.743263 systemd[1]: No hostname configured, using default hostname. Jan 28 01:18:49.743273 systemd[1]: Hostname set to . Jan 28 01:18:49.743285 systemd[1]: Initializing machine ID from VM UUID. Jan 28 01:18:49.743296 systemd[1]: Queued start job for default target initrd.target. Jan 28 01:18:49.743307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:18:49.743322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:18:49.743334 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:18:49.743346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:18:49.743357 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:18:49.743369 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 01:18:49.743382 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 01:18:49.743393 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 01:18:49.743407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:18:49.743419 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:18:49.743430 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:18:49.743441 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:18:49.743468 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:18:49.743482 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:18:49.743497 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:18:49.743509 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:18:49.746089 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:18:49.746104 kernel: hrtimer: interrupt took 13089693 ns Jan 28 01:18:49.746116 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 28 01:18:49.746129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:18:49.746141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:18:49.746153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:18:49.746166 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:18:49.746184 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:18:49.746196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:18:49.746208 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:18:49.746220 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:18:49.746232 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:18:49.746244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:18:49.746290 systemd-journald[195]: Collecting audit messages is disabled. Jan 28 01:18:49.746322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:18:49.746334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:18:49.746350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:18:49.746362 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 01:18:49.746378 systemd-journald[195]: Journal started Jan 28 01:18:49.746403 systemd-journald[195]: Runtime Journal (/run/log/journal/9fa51fe2f41c4cb89a88f9c5674590aa) is 6.0M, max 48.4M, 42.3M free. Jan 28 01:18:49.805118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:18:49.782575 systemd-modules-load[196]: Inserted module 'overlay' Jan 28 01:18:50.092933 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 01:18:50.092989 kernel: Bridge firewalling registered Jan 28 01:18:50.093012 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:18:49.925238 systemd-modules-load[196]: Inserted module 'br_netfilter' Jan 28 01:18:50.091450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:18:50.091987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:18:50.141137 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:18:50.150062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:18:50.156449 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:18:50.214085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:18:50.245957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:18:50.283588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:18:50.308711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:18:50.339679 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:18:50.398086 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:18:50.411604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 28 01:18:50.427044 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 01:18:50.500580 systemd-resolved[226]: Positive Trust Anchors: Jan 28 01:18:50.502085 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:18:50.502150 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:18:50.535992 dracut-cmdline[233]: dracut-dracut-053 Jan 28 01:18:50.508023 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 28 01:18:50.515504 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:18:50.634312 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 01:18:50.523727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:18:51.021577 kernel: SCSI subsystem initialized Jan 28 01:18:51.034883 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:18:51.057133 kernel: iscsi: registered transport (tcp) Jan 28 01:18:51.108589 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:18:51.108668 kernel: QLogic iSCSI HBA Driver Jan 28 01:18:51.226585 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:18:51.244066 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 01:18:51.343708 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 01:18:51.343800 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:18:51.348765 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 28 01:18:51.441584 kernel: raid6: avx2x4 gen() 18140 MB/s Jan 28 01:18:51.458595 kernel: raid6: avx2x2 gen() 21140 MB/s Jan 28 01:18:51.485428 kernel: raid6: avx2x1 gen() 12188 MB/s Jan 28 01:18:51.485563 kernel: raid6: using algorithm avx2x2 gen() 21140 MB/s Jan 28 01:18:51.509624 kernel: raid6: .... xor() 18901 MB/s, rmw enabled Jan 28 01:18:51.510458 kernel: raid6: using avx2x2 recovery algorithm Jan 28 01:18:51.551947 kernel: xor: automatically using best checksumming function avx Jan 28 01:18:51.931785 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:18:51.977028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:18:51.996043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:18:52.036734 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 28 01:18:52.047642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:18:52.097312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 28 01:18:52.142484 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Jan 28 01:18:52.282688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:18:52.319059 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:18:52.546382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:18:52.587865 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:18:52.646079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 01:18:52.679873 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:18:52.691474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:18:52.706920 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:18:52.721896 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 01:18:52.727118 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 01:18:52.733144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:18:52.733694 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:18:52.772780 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:18:52.773506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:18:52.773795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:18:52.785688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:18:52.812470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:18:52.825336 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 01:18:52.824315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:18:52.850872 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 28 01:18:52.873162 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 01:18:52.873240 kernel: GPT:9289727 != 19775487 Jan 28 01:18:52.873257 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 01:18:52.875899 kernel: GPT:9289727 != 19775487 Jan 28 01:18:52.879699 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 01:18:52.879740 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 01:18:52.904591 kernel: libata version 3.00 loaded. Jan 28 01:18:52.923860 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 01:18:52.925788 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 01:18:52.925858 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 28 01:18:52.926136 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 01:18:52.935396 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 28 01:18:52.935443 kernel: AES CTR mode by8 optimization enabled Jan 28 01:18:52.935909 kernel: scsi host0: ahci Jan 28 01:18:52.936993 kernel: scsi host1: ahci Jan 28 01:18:52.937928 kernel: scsi host2: ahci Jan 28 01:18:52.938173 kernel: scsi host3: ahci Jan 28 01:18:52.938514 kernel: scsi host4: ahci Jan 28 01:18:52.942993 kernel: scsi host5: ahci Jan 28 01:18:52.944716 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 28 01:18:52.944880 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 28 01:18:52.944906 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 28 01:18:52.945030 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 28 01:18:52.945063 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 28 01:18:52.945180 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 28 01:18:52.973874 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Jan 28 01:18:52.987759 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 01:18:53.206730 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (466) Jan 28 01:18:53.206333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:18:53.244966 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 01:18:53.263487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:18:53.288127 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 01:18:53.288168 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 01:18:53.294706 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 01:18:53.294755 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 01:18:53.294670 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 28 01:18:53.344398 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 01:18:53.344440 kernel: ata3.00: applying bridge limits Jan 28 01:18:53.344461 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 01:18:53.344490 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 01:18:53.344507 kernel: ata3.00: configured for UDMA/100 Jan 28 01:18:53.319680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 01:18:53.355075 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:18:53.379206 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 01:18:53.383978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:18:53.403283 disk-uuid[558]: Primary Header is updated. Jan 28 01:18:53.403283 disk-uuid[558]: Secondary Entries is updated. Jan 28 01:18:53.403283 disk-uuid[558]: Secondary Header is updated. Jan 28 01:18:53.427200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 01:18:53.444002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 01:18:53.516399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 28 01:18:53.990808 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 01:18:53.991256 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 01:18:54.023564 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 01:18:54.458544 disk-uuid[560]: Warning: The kernel is still using the old partition table. Jan 28 01:18:54.458544 disk-uuid[560]: The new table will be used at the next reboot or after you Jan 28 01:18:54.458544 disk-uuid[560]: run partprobe(8) or kpartx(8) Jan 28 01:18:54.458544 disk-uuid[560]: The operation has completed successfully. Jan 28 01:18:54.800333 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:18:54.807276 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:18:54.873471 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 01:18:54.945147 sh[595]: Success Jan 28 01:18:55.090589 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 28 01:18:55.618017 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:18:55.656747 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 28 01:18:55.712015 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 01:18:55.784267 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 Jan 28 01:18:55.784343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:18:55.784360 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 01:18:55.789630 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:18:55.797628 kernel: BTRFS info (device dm-0): using free space tree Jan 28 01:18:55.836740 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 01:18:55.842571 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:18:55.863115 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 01:18:55.887720 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:18:55.945586 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:18:55.945652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:18:55.945668 kernel: BTRFS info (device vda6): using free space tree Jan 28 01:18:55.977114 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 01:18:56.001358 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 01:18:56.015799 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:18:56.056494 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:18:56.085895 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 01:18:56.448083 ignition[706]: Ignition 2.19.0 Jan 28 01:18:56.448679 ignition[706]: Stage: fetch-offline Jan 28 01:18:56.448793 ignition[706]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:18:56.448863 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:18:56.449290 ignition[706]: parsed url from cmdline: "" Jan 28 01:18:56.473797 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 28 01:18:56.449296 ignition[706]: no config URL provided Jan 28 01:18:56.449305 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:18:56.449318 ignition[706]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:18:56.449365 ignition[706]: op(1): [started] loading QEMU firmware config module Jan 28 01:18:56.449372 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 01:18:56.506299 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:18:56.492384 ignition[706]: op(1): [finished] loading QEMU firmware config module Jan 28 01:18:56.597729 systemd-networkd[784]: lo: Link UP Jan 28 01:18:56.597757 systemd-networkd[784]: lo: Gained carrier Jan 28 01:18:56.619791 systemd-networkd[784]: Enumeration completed Jan 28 01:18:56.623308 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:18:56.636026 systemd[1]: Reached target network.target - Network. Jan 28 01:18:56.647755 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:18:56.647762 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:18:56.714167 systemd-networkd[784]: eth0: Link UP Jan 28 01:18:56.714196 systemd-networkd[784]: eth0: Gained carrier Jan 28 01:18:56.714263 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:18:56.767927 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:18:56.872892 ignition[706]: parsing config with SHA512: 4f3ac456644414b11f7345f01d0cf9e0b3f863fcf01711f9c7ff014eb557fd557124c9c676061d07864df677eba9a16d2778271e62f8c309a65af1d12b4a214d Jan 28 01:18:56.905570 unknown[706]: fetched base config from "system" Jan 28 01:18:56.905591 unknown[706]: fetched user config from "qemu" Jan 28 01:18:56.914321 ignition[706]: fetch-offline: fetch-offline passed Jan 28 01:18:56.914603 ignition[706]: Ignition finished successfully Jan 28 01:18:56.942890 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:18:56.947309 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 01:18:56.982977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 01:18:57.096807 ignition[788]: Ignition 2.19.0 Jan 28 01:18:57.096980 ignition[788]: Stage: kargs Jan 28 01:18:57.097292 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:18:57.097312 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:18:57.101954 ignition[788]: kargs: kargs passed Jan 28 01:18:57.102034 ignition[788]: Ignition finished successfully Jan 28 01:18:57.129873 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 01:18:57.160056 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 01:18:57.661691 ignition[796]: Ignition 2.19.0 Jan 28 01:18:57.661708 ignition[796]: Stage: disks Jan 28 01:18:57.662128 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:18:57.662144 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:18:57.683803 ignition[796]: disks: disks passed Jan 28 01:18:57.694418 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 28 01:18:57.683916 ignition[796]: Ignition finished successfully Jan 28 01:18:57.709926 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 01:18:57.721333 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:18:57.761399 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:18:57.797913 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:18:57.798793 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:18:57.830610 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 01:18:58.013499 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 28 01:18:58.025111 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 01:18:58.108218 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 01:18:58.620194 systemd-networkd[784]: eth0: Gained IPv6LL Jan 28 01:18:58.670602 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none. Jan 28 01:18:58.668275 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 01:18:58.677148 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 01:18:58.726728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:18:58.741233 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 01:18:58.742401 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 01:18:58.775921 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 28 01:18:58.742466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 01:18:58.742503 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:18:58.831779 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:18:58.831882 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:18:58.831904 kernel: BTRFS info (device vda6): using free space tree Jan 28 01:18:58.845329 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 01:18:58.890241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:18:58.920099 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 01:18:58.990151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 01:18:59.398566 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 01:18:59.437358 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 28 01:18:59.460672 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 01:18:59.527705 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 01:18:59.992617 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:19:00.386924 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:19:00.420566 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:19:00.457391 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 28 01:19:00.487032 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:19:00.599307 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:19:00.650244 ignition[928]: INFO : Ignition 2.19.0 Jan 28 01:19:00.650244 ignition[928]: INFO : Stage: mount Jan 28 01:19:00.656176 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:19:00.656176 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:19:00.656176 ignition[928]: INFO : mount: mount passed Jan 28 01:19:00.656176 ignition[928]: INFO : Ignition finished successfully Jan 28 01:19:00.660344 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:19:00.696033 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:19:00.714038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:19:00.753892 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Jan 28 01:19:00.775078 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:19:00.775470 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:19:00.775522 kernel: BTRFS info (device vda6): using free space tree Jan 28 01:19:00.803181 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 01:19:00.810352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:19:00.898286 ignition[958]: INFO : Ignition 2.19.0 Jan 28 01:19:00.898286 ignition[958]: INFO : Stage: files Jan 28 01:19:00.898286 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:19:00.898286 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:19:00.926213 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:19:00.926213 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:19:00.926213 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:19:00.926213 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:19:00.958167 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:19:00.958167 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:19:00.958167 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 01:19:00.958167 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 01:19:00.958167 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:19:00.958167 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 01:19:00.928071 unknown[958]: wrote ssh authorized keys file for user: core Jan 28 01:19:01.066311 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 01:19:01.914752 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:19:01.936791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 01:19:01.953672 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 28 01:19:02.137004 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 28 01:19:03.856444 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 01:19:03.865478 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:19:03.881176 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:19:03.881176 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:19:03.896172 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:19:03.896172 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:19:03.911687 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:19:03.911687 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:19:03.911687 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:19:03.936025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:19:03.936025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:19:03.936025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:19:03.936025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:19:03.936025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:19:03.936025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 01:19:04.186118 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 28 01:19:07.742648 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:19:07.742648 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 28 01:19:07.759453 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(d): op(e): [finished] 
writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 28 01:19:07.775948 ignition[958]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 01:19:07.941984 ignition[958]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:19:07.953934 ignition[958]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:19:07.960716 ignition[958]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 01:19:07.960716 ignition[958]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:19:07.960716 ignition[958]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:19:07.960716 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:19:07.960716 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:19:07.960716 ignition[958]: INFO : files: files passed Jan 28 01:19:07.960716 ignition[958]: INFO : Ignition finished successfully Jan 28 01:19:08.032614 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:19:08.056096 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:19:08.071737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:19:08.077945 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:19:08.079226 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 28 01:19:08.532640 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 01:19:08.551640 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:19:08.551640 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:19:08.594651 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:19:08.608264 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:19:08.617383 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:19:08.643401 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:19:08.814341 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:19:08.814996 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:19:08.839809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:19:08.854243 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:19:08.864486 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:19:08.910088 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:19:08.980603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:19:09.007700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:19:09.051115 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:19:09.058884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:19:09.078694 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:19:09.097177 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:19:09.104270 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:19:09.122330 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:19:09.137466 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:19:09.141074 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:19:09.152319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:19:09.176365 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:19:09.177700 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:19:09.177920 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:19:09.178109 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:19:09.178273 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:19:09.178403 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:19:09.178497 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:19:09.178735 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:19:09.179170 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:19:09.179457 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:19:09.179599 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 28 01:19:09.186484 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:19:09.188259 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:19:09.194300 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:19:09.196042 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:19:09.196236 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:19:09.196997 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:19:09.197495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:19:09.198694 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:19:09.200508 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:19:09.202417 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:19:09.204712 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:19:09.204892 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:19:09.209250 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:19:09.209428 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:19:09.210021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:19:09.210179 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:19:09.213453 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:19:09.213694 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:19:09.296431 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:19:09.506103 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:19:09.524012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:19:09.524306 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:19:09.532752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:19:09.532990 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:19:09.589143 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:19:09.589326 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:19:09.627046 ignition[1013]: INFO : Ignition 2.19.0 Jan 28 01:19:09.627046 ignition[1013]: INFO : Stage: umount Jan 28 01:19:09.627046 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:19:09.627046 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:19:09.627046 ignition[1013]: INFO : umount: umount passed Jan 28 01:19:09.627046 ignition[1013]: INFO : Ignition finished successfully Jan 28 01:19:09.634929 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:19:09.635160 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:19:09.648636 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:19:09.655428 systemd[1]: Stopped target network.target - Network. Jan 28 01:19:09.678692 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:19:09.678866 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:19:09.688489 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:19:09.688622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 28 01:19:09.688740 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:19:09.688808 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:19:09.688989 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:19:09.689065 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:19:09.691725 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:19:09.701700 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:19:09.703421 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:19:09.703604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:19:09.704534 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:19:09.704699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:19:09.738755 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:19:09.743090 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:19:09.753643 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:19:09.753769 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:19:09.767184 systemd-networkd[784]: eth0: DHCPv6 lease lost Jan 28 01:19:09.934027 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:19:09.940375 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:19:09.967214 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:19:09.971323 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:19:10.013406 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:19:10.025678 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:19:10.025794 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:19:10.038358 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:19:10.038453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:19:10.093018 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:19:10.094134 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:19:10.121441 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:19:10.171454 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:19:10.178317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:19:10.209233 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:19:10.209450 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:19:10.247062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:19:10.247174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:19:10.256657 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:19:10.256725 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:19:10.266391 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:19:10.266480 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:19:10.309320 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 28 01:19:10.309409 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:19:10.334973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:19:10.335108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:19:10.359450 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:19:10.379490 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:19:10.383112 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:19:10.389287 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 01:19:10.389388 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:19:10.399914 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:19:10.399999 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:19:10.410677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:19:10.410768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:19:10.414718 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:19:10.414928 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:19:10.450960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:19:10.497714 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:19:10.577438 systemd[1]: Switching root. Jan 28 01:19:10.656544 systemd-journald[195]: Journal stopped Jan 28 01:19:15.890760 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Jan 28 01:19:15.890931 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:19:15.890958 kernel: SELinux: policy capability open_perms=1 Jan 28 01:19:15.890975 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:19:15.890993 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:19:15.891012 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:19:15.891030 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:19:15.891047 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:19:15.891063 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:19:15.891084 kernel: audit: type=1403 audit(1769563151.831:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 01:19:15.892488 systemd[1]: Successfully loaded SELinux policy in 228.974ms. Jan 28 01:19:15.892538 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.262ms. Jan 28 01:19:15.892591 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:19:15.892615 systemd[1]: Detected virtualization kvm. Jan 28 01:19:15.892633 systemd[1]: Detected architecture x86-64. Jan 28 01:19:15.892648 systemd[1]: Detected first boot. Jan 28 01:19:15.892663 systemd[1]: Initializing machine ID from VM UUID. Jan 28 01:19:15.892684 zram_generator::config[1075]: No configuration found. Jan 28 01:19:15.892709 systemd[1]: Populated /etc with preset unit settings. 
Jan 28 01:19:15.892725 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:19:15.892741 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 01:19:15.892762 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:19:15.892871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:19:15.892889 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:19:15.892905 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:19:15.892925 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:19:15.892950 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:19:15.892967 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:19:15.892983 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:19:15.892998 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:19:15.893018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:19:15.893037 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:19:15.893054 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:19:15.893070 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:19:15.893086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:19:15.893109 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:19:15.893128 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:19:15.893146 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:19:15.893161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:19:15.893177 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:19:15.893194 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:19:15.893212 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:19:15.893231 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:19:15.893298 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:19:15.893320 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:19:15.893341 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 01:19:15.893360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:19:15.893379 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:19:15.893395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:19:15.893410 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:19:15.893425 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:19:15.893445 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:19:15.893469 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 28 01:19:15.893487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:19:15.893503 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:19:15.893518 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:19:15.893535 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:19:15.893553 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:19:15.893606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:19:15.893623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:19:15.893639 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:19:15.893663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:19:15.893682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:19:15.893700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:19:15.893751 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:19:15.893770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:19:15.893791 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:19:15.893807 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 28 01:19:15.893874 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 28 01:19:15.893900 kernel: fuse: init (API version 7.39) Jan 28 01:19:15.893915 kernel: loop: module loaded Jan 28 01:19:15.893931 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:19:15.893946 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:19:15.893965 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:19:15.893985 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:19:15.894002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:19:15.894017 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:19:15.894033 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:19:15.894055 kernel: ACPI: bus type drm_connector registered Jan 28 01:19:15.894072 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:19:15.894090 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:19:15.894136 systemd-journald[1174]: Collecting audit messages is disabled. Jan 28 01:19:15.894178 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:19:15.894196 systemd-journald[1174]: Journal started Jan 28 01:19:15.894227 systemd-journald[1174]: Runtime Journal (/run/log/journal/9fa51fe2f41c4cb89a88f9c5674590aa) is 6.0M, max 48.4M, 42.3M free. Jan 28 01:19:15.914502 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 28 01:19:15.919757 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:19:15.924078 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:19:15.928259 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:19:15.933121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:19:15.939782 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:19:15.941252 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:19:15.952296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:19:15.952548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:19:15.959452 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:19:15.959868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:19:15.986218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:19:15.986606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:19:15.993772 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:19:15.994198 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:19:16.003248 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:19:16.003643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:19:16.009446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:19:16.023324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:19:16.036958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:19:16.097328 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:19:16.136438 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:19:16.211247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:19:16.298923 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:19:16.318532 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:19:16.341043 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:19:16.371374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:19:16.388927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:19:16.391301 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:19:16.404075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:19:16.414043 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:19:16.479091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:19:16.515160 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 01:19:16.537452 systemd-journald[1174]: Time spent on flushing to /var/log/journal/9fa51fe2f41c4cb89a88f9c5674590aa is 40.863ms for 941 entries. 
Jan 28 01:19:16.537452 systemd-journald[1174]: System Journal (/var/log/journal/9fa51fe2f41c4cb89a88f9c5674590aa) is 8.0M, max 195.6M, 187.6M free. Jan 28 01:19:16.656288 systemd-journald[1174]: Received client request to flush runtime journal. Jan 28 01:19:16.543951 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:19:16.555483 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:19:16.579337 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:19:16.588868 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:19:16.639658 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 28 01:19:16.649398 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 28 01:19:16.649421 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 28 01:19:16.660052 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:19:16.702307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:19:16.709969 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:19:16.730732 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:19:16.895429 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:19:17.290253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:19:17.396433 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 28 01:19:17.396473 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 28 01:19:17.415547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:19:19.194955 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:19:19.333096 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:19:19.404198 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jan 28 01:19:19.516960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:19:19.553280 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:19:19.616124 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:19:19.689354 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 28 01:19:20.525701 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1249) Jan 28 01:19:20.613381 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 28 01:19:20.632068 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 01:19:20.659867 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:19:20.717423 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:19:20.718020 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 01:19:20.718375 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:19:20.773323 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 01:19:21.036176 systemd-networkd[1246]: lo: Link UP Jan 28 01:19:21.036799 systemd-networkd[1246]: lo: Gained carrier Jan 28 01:19:21.040636 systemd-networkd[1246]: Enumeration completed Jan 28 01:19:21.041035 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:19:21.041930 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:19:21.041936 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:19:21.047876 systemd-networkd[1246]: eth0: Link UP Jan 28 01:19:21.047883 systemd-networkd[1246]: eth0: Gained carrier Jan 28 01:19:21.047904 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:19:21.070097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:19:21.119943 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:19:21.425304 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:19:21.429084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:19:21.458130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:19:22.162153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:19:22.560352 kernel: kvm_amd: TSC scaling supported Jan 28 01:19:22.560470 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:19:22.560499 kernel: kvm_amd: Nested Paging enabled Jan 28 01:19:22.566094 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:19:22.572289 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:19:22.960083 systemd-networkd[1246]: eth0: Gained IPv6LL Jan 28 01:19:23.018649 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:19:23.058920 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:19:23.140928 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 01:19:23.182252 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 01:19:23.246684 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:19:23.312483 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 01:19:23.323253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:19:23.342258 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 01:19:23.357106 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:19:23.441243 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 28 01:19:23.452096 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:19:23.456964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:19:23.457020 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:19:23.461159 systemd[1]: Reached target machines.target - Containers. Jan 28 01:19:23.475439 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 01:19:23.512318 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:19:23.531542 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:19:23.536798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:19:23.541001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:19:23.555213 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 01:19:23.581410 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:19:23.593896 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:19:23.629887 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:19:23.678520 kernel: loop0: detected capacity change from 0 to 142488 Jan 28 01:19:23.694940 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:19:23.704103 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 01:19:23.759891 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:19:23.806892 kernel: loop1: detected capacity change from 0 to 140768 Jan 28 01:19:24.822023 kernel: loop2: detected capacity change from 0 to 224512 Jan 28 01:19:25.108568 kernel: loop3: detected capacity change from 0 to 142488 Jan 28 01:19:25.351163 kernel: loop4: detected capacity change from 0 to 140768 Jan 28 01:19:25.516521 kernel: loop5: detected capacity change from 0 to 224512 Jan 28 01:19:25.593855 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 01:19:25.595024 (sd-merge)[1313]: Merged extensions into '/usr'. Jan 28 01:19:25.619398 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:19:25.619424 systemd[1]: Reloading... Jan 28 01:19:26.153943 zram_generator::config[1340]: No configuration found. Jan 28 01:19:26.819962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:19:27.005709 systemd[1]: Reloading finished in 1385 ms. Jan 28 01:19:28.041388 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:19:28.088075 systemd[1]: Starting ensure-sysext.service... Jan 28 01:19:28.099924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:19:28.188245 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... 
Jan 28 01:19:28.188290 systemd[1]: Reloading... Jan 28 01:19:28.205498 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:19:28.502475 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:19:28.512283 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:19:28.514175 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:19:28.526538 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 28 01:19:28.527938 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 28 01:19:28.555706 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:19:28.555745 systemd-tmpfiles[1383]: Skipping /boot Jan 28 01:19:28.712480 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:19:28.712524 systemd-tmpfiles[1383]: Skipping /boot Jan 28 01:19:28.723858 zram_generator::config[1413]: No configuration found. Jan 28 01:19:29.442889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:19:29.760088 systemd[1]: Reloading finished in 1568 ms. Jan 28 01:19:29.933914 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:19:30.029244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:19:30.095378 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:19:30.108044 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:19:30.121720 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:19:30.145175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:19:30.161378 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:19:30.227939 systemd[1]: Finished ensure-sysext.service. Jan 28 01:19:30.235537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:19:30.237498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:19:30.255025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:19:30.301266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:19:30.317059 augenrules[1487]: No rules Jan 28 01:19:30.324141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:19:30.337707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:19:30.345798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:19:30.355064 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 01:19:30.364416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 28 01:19:30.369348 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:19:30.391633 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:19:30.398506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:19:30.398911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:19:30.408898 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:19:30.417495 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:19:30.417907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:19:30.428674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:19:30.429974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:19:30.445336 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:19:30.445790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:19:30.460790 systemd-resolved[1471]: Positive Trust Anchors: Jan 28 01:19:30.460808 systemd-resolved[1471]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:19:30.460902 systemd-resolved[1471]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:19:30.474766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:19:30.478185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:19:30.491135 systemd-resolved[1471]: Defaulting to hostname 'linux'. Jan 28 01:19:30.492371 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:19:30.500185 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:19:30.504212 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:19:30.511105 systemd[1]: Reached target network.target - Network. Jan 28 01:19:30.514299 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:19:30.517969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:19:30.526113 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:19:30.588946 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:19:31.631569 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:19:31.635607 systemd-resolved[1471]: Clock change detected. Flushing caches. Jan 28 01:19:31.643732 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:19:31.643807 systemd-timesyncd[1497]: Initial clock synchronization to Wed 2026-01-28 01:19:31.631357 UTC. 
Jan 28 01:19:31.645288 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:19:31.651525 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:19:31.665753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:19:31.674426 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:19:31.681906 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:19:31.682007 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:19:31.685778 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:19:31.690222 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:19:31.694012 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:19:31.697513 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:19:31.705929 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:19:31.715721 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:19:31.721937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:19:31.735997 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:19:31.750938 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:19:31.754757 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:19:31.761045 systemd[1]: System is tainted: cgroupsv1 Jan 28 01:19:31.761131 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:19:31.761171 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:19:31.764480 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:19:31.770967 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:19:31.778873 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:19:31.785919 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:19:31.796424 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:19:31.800106 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:19:31.806658 jq[1520]: false Jan 28 01:19:31.823796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:19:31.837409 dbus-daemon[1518]: [system] SELinux support is enabled Jan 28 01:19:31.838864 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:19:31.869671 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:19:31.876448 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 28 01:19:31.881118 extend-filesystems[1521]: Found loop3 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found loop4 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found loop5 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found sr0 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda1 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda2 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda3 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found usr Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda4 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda6 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda7 Jan 28 01:19:31.884035 extend-filesystems[1521]: Found vda9 Jan 28 01:19:31.884035 extend-filesystems[1521]: Checking size of /dev/vda9 Jan 28 01:19:31.889443 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:19:31.901782 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:19:31.915551 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:19:31.928428 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:19:31.933474 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:19:31.938733 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:19:31.972562 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:19:31.980663 jq[1545]: true Jan 28 01:19:31.990836 extend-filesystems[1521]: Resized partition /dev/vda9 Jan 28 01:19:31.997189 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:19:31.997706 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:19:32.005103 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:19:32.005691 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:19:32.016711 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024) Jan 28 01:19:32.027065 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 01:19:32.034906 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:19:32.035344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:19:32.039743 update_engine[1542]: I20260128 01:19:32.039647 1542 main.cc:92] Flatcar Update Engine starting Jan 28 01:19:32.075894 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:19:32.075939 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:19:32.083044 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:19:32.083088 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 28 01:19:32.089502 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:19:32.092404 jq[1562]: true Jan 28 01:19:32.097522 update_engine[1542]: I20260128 01:19:32.095472 1542 update_check_scheduler.cc:74] Next update check in 10m56s Jan 28 01:19:32.106621 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:19:32.121260 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:19:32.125308 tar[1558]: linux-amd64/LICENSE Jan 28 01:19:32.131291 tar[1558]: linux-amd64/helm Jan 28 01:19:32.154643 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 01:19:32.154725 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1582) Jan 28 01:19:32.136103 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:19:32.139540 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:19:32.185007 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:19:32.185007 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:19:32.185007 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 28 01:19:32.173889 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:19:32.232924 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Jan 28 01:19:32.174351 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:19:32.270430 bash[1609]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:19:32.187772 systemd-logind[1539]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 01:19:32.187815 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:19:32.195230 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:19:32.198829 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:19:32.199345 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:19:32.236295 systemd-logind[1539]: New seat seat0. Jan 28 01:19:32.272850 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:19:32.278519 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:19:32.333942 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:19:32.717522 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:19:33.099772 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:19:33.248620 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:19:33.320167 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:19:33.339178 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:19:33.368553 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:53948.service - OpenSSH per-connection server daemon (10.0.0.1:53948). Jan 28 01:19:33.385879 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:19:33.388915 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 28 01:19:33.412217 containerd[1563]: time="2026-01-28T01:19:33.408425329Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:19:33.416960 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:19:33.579615 containerd[1563]: time="2026-01-28T01:19:33.578851701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.584048 containerd[1563]: time="2026-01-28T01:19:33.583919575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:19:33.584048 containerd[1563]: time="2026-01-28T01:19:33.583969277Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:19:33.584185 containerd[1563]: time="2026-01-28T01:19:33.584055549Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 01:19:33.584406 containerd[1563]: time="2026-01-28T01:19:33.584309543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:19:33.584406 containerd[1563]: time="2026-01-28T01:19:33.584358875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.584552 containerd[1563]: time="2026-01-28T01:19:33.584510749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:19:33.584552 containerd[1563]: time="2026-01-28T01:19:33.584534493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.585017 containerd[1563]: time="2026-01-28T01:19:33.584941113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:19:33.585017 containerd[1563]: time="2026-01-28T01:19:33.584990485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.585017 containerd[1563]: time="2026-01-28T01:19:33.585013959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:19:33.585120 containerd[1563]: time="2026-01-28T01:19:33.585041170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.585512 containerd[1563]: time="2026-01-28T01:19:33.585184247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.585857 containerd[1563]: time="2026-01-28T01:19:33.585684681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:19:33.586738 containerd[1563]: time="2026-01-28T01:19:33.585964203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:19:33.586738 containerd[1563]: time="2026-01-28T01:19:33.586012233Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:19:33.586738 containerd[1563]: time="2026-01-28T01:19:33.586131295Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 01:19:33.586738 containerd[1563]: time="2026-01-28T01:19:33.586201686Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:19:33.599111 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:19:33.605899 containerd[1563]: time="2026-01-28T01:19:33.605747731Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:19:33.608001 containerd[1563]: time="2026-01-28T01:19:33.607807997Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:19:33.608001 containerd[1563]: time="2026-01-28T01:19:33.607842181Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 01:19:33.608109 containerd[1563]: time="2026-01-28T01:19:33.608030433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:19:33.608109 containerd[1563]: time="2026-01-28T01:19:33.608051322Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:19:33.608336 containerd[1563]: time="2026-01-28T01:19:33.608265812Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:19:33.609783 containerd[1563]: time="2026-01-28T01:19:33.609734455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.609919621Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.609944267Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.609960497Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.609977830Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.609994070Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.610010941Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.610058369Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.610078727Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 28 01:19:33.658163 containerd[1563]: time="2026-01-28T01:19:33.610095068Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:33.610111359Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.099418468Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.099746601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.099877335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.099942657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.100046741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.100148281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.100226457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.100308019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.101712 containerd[1563]: time="2026-01-28T01:19:34.100408096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.100144 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:19:34.103213 containerd[1563]: time="2026-01-28T01:19:34.103183448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.103304 containerd[1563]: time="2026-01-28T01:19:34.103284938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.103418 containerd[1563]: time="2026-01-28T01:19:34.103364066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.103496 containerd[1563]: time="2026-01-28T01:19:34.103475353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.103571 containerd[1563]: time="2026-01-28T01:19:34.103551967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.103812 containerd[1563]: time="2026-01-28T01:19:34.103785653Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:19:34.103944 containerd[1563]: time="2026-01-28T01:19:34.103920285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.104023 containerd[1563]: time="2026-01-28T01:19:34.104003911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 28 01:19:34.104095 containerd[1563]: time="2026-01-28T01:19:34.104075404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:19:34.104298 containerd[1563]: time="2026-01-28T01:19:34.104241805Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:19:34.104565 containerd[1563]: time="2026-01-28T01:19:34.104531947Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:19:34.104691 containerd[1563]: time="2026-01-28T01:19:34.104672690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:19:34.104781 containerd[1563]: time="2026-01-28T01:19:34.104760123Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:19:34.104850 containerd[1563]: time="2026-01-28T01:19:34.104833930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.104952 containerd[1563]: time="2026-01-28T01:19:34.104930951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 01:19:34.105240 containerd[1563]: time="2026-01-28T01:19:34.105217958Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:19:34.109725 containerd[1563]: time="2026-01-28T01:19:34.109697813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 28 01:19:34.111297 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 28 01:19:34.112001 containerd[1563]: time="2026-01-28T01:19:34.111908581Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:19:34.112332 containerd[1563]: time="2026-01-28T01:19:34.112309791Z" level=info msg="Connect containerd service" Jan 28 01:19:34.112509 containerd[1563]: time="2026-01-28T01:19:34.112484838Z" level=info msg="using legacy CRI server" Jan 28 01:19:34.112686 containerd[1563]: time="2026-01-28T01:19:34.112560559Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:19:34.113128 containerd[1563]: time="2026-01-28T01:19:34.113101148Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:19:34.114642 containerd[1563]: time="2026-01-28T01:19:34.114566675Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:19:34.119414 systemd[1]: Reached 
target getty.target - Login Prompts. Jan 28 01:19:34.129883 containerd[1563]: time="2026-01-28T01:19:34.129833693Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:19:34.130220 containerd[1563]: time="2026-01-28T01:19:34.130195309Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:19:34.132859 containerd[1563]: time="2026-01-28T01:19:34.131098155Z" level=info msg="Start subscribing containerd event" Jan 28 01:19:34.135669 containerd[1563]: time="2026-01-28T01:19:34.135641579Z" level=info msg="Start recovering state" Jan 28 01:19:34.135938 containerd[1563]: time="2026-01-28T01:19:34.135915570Z" level=info msg="Start event monitor" Jan 28 01:19:34.136037 containerd[1563]: time="2026-01-28T01:19:34.136013744Z" level=info msg="Start snapshots syncer" Jan 28 01:19:34.136111 containerd[1563]: time="2026-01-28T01:19:34.136091359Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:19:34.136178 containerd[1563]: time="2026-01-28T01:19:34.136161641Z" level=info msg="Start streaming server" Jan 28 01:19:34.136409 containerd[1563]: time="2026-01-28T01:19:34.136352857Z" level=info msg="containerd successfully booted in 0.733201s" Jan 28 01:19:34.136916 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:19:34.485304 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 53948 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:34.498757 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:34.531830 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:19:34.565548 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:19:34.577444 systemd-logind[1539]: New session 1 of user core. Jan 28 01:19:35.028178 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:19:35.620039 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:19:35.658526 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:19:36.018261 tar[1558]: linux-amd64/README.md Jan 28 01:19:36.164083 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:19:36.369493 systemd[1655]: Queued start job for default target default.target. Jan 28 01:19:36.370227 systemd[1655]: Created slice app.slice - User Application Slice. Jan 28 01:19:36.370255 systemd[1655]: Reached target paths.target - Paths. Jan 28 01:19:36.370276 systemd[1655]: Reached target timers.target - Timers. Jan 28 01:19:36.381026 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:19:36.400836 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:19:36.400922 systemd[1655]: Reached target sockets.target - Sockets. Jan 28 01:19:36.400942 systemd[1655]: Reached target basic.target - Basic System. Jan 28 01:19:36.400997 systemd[1655]: Reached target default.target - Main User Target. Jan 28 01:19:36.401046 systemd[1655]: Startup finished in 724ms. Jan 28 01:19:36.401772 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:19:36.427361 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:19:36.677081 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:53952.service - OpenSSH per-connection server daemon (10.0.0.1:53952). 
Jan 28 01:19:37.218853 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 53952 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:37.466798 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:37.500290 systemd-logind[1539]: New session 2 of user core. Jan 28 01:19:37.507321 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:19:37.679157 sshd[1672]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:37.709816 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964). Jan 28 01:19:37.836510 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:53952.service: Deactivated successfully. Jan 28 01:19:37.855266 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 01:19:37.859957 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Jan 28 01:19:37.865414 systemd-logind[1539]: Removed session 2. Jan 28 01:19:38.178451 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:38.178492 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:38.202869 systemd-logind[1539]: New session 3 of user core. Jan 28 01:19:38.214160 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:19:38.640144 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:38.679990 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:53964.service: Deactivated successfully. Jan 28 01:19:38.685024 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:19:38.686672 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:19:38.688184 systemd-logind[1539]: Removed session 3. Jan 28 01:19:40.701057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:19:40.703667 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:19:40.704013 systemd[1]: Startup finished in 27.139s (kernel) + 28.112s (userspace) = 55.252s. Jan 28 01:19:40.735641 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:19:47.786207 kubelet[1700]: E0128 01:19:47.779306 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:19:47.795089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:19:47.795492 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:19:48.754063 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:43000.service - OpenSSH per-connection server daemon (10.0.0.1:43000). Jan 28 01:19:49.101939 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 43000 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:49.108173 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:49.125707 systemd-logind[1539]: New session 4 of user core. Jan 28 01:19:49.139066 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 28 01:19:49.337076 sshd[1709]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:49.351220 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:43000.service: Deactivated successfully. Jan 28 01:19:49.369640 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:19:49.390429 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:43002.service - OpenSSH per-connection server daemon (10.0.0.1:43002). Jan 28 01:19:49.391127 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:19:49.394909 systemd-logind[1539]: Removed session 4. Jan 28 01:19:49.561189 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:49.574037 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:49.650060 systemd-logind[1539]: New session 5 of user core. Jan 28 01:19:49.667114 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:19:49.782848 sshd[1717]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:49.796524 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Jan 28 01:19:49.809731 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:43002.service: Deactivated successfully. Jan 28 01:19:49.814503 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:19:49.816362 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:19:49.836146 systemd-logind[1539]: Removed session 5. Jan 28 01:19:49.923929 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:49.934010 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:49.992091 systemd-logind[1539]: New session 6 of user core. Jan 28 01:19:50.001235 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:19:50.264482 sshd[1722]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:50.295651 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:43016.service - OpenSSH per-connection server daemon (10.0.0.1:43016). Jan 28 01:19:50.298498 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:43004.service: Deactivated successfully. Jan 28 01:19:50.314515 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:19:50.318967 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:19:50.324873 systemd-logind[1539]: Removed session 6. Jan 28 01:19:50.415509 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 43016 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:50.416952 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:50.443097 systemd-logind[1539]: New session 7 of user core. Jan 28 01:19:50.454106 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:19:50.754137 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:19:50.757488 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:50.792986 sudo[1737]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:50.801029 sshd[1730]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:50.824619 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032). 
Jan 28 01:19:50.825468 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:43016.service: Deactivated successfully. Jan 28 01:19:50.836682 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:19:50.837908 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:19:50.851896 systemd-logind[1539]: Removed session 7. Jan 28 01:19:50.912672 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:50.920007 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:50.934735 systemd-logind[1539]: New session 8 of user core. Jan 28 01:19:50.951075 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:19:52.224649 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:19:52.225276 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:52.241886 sudo[1747]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:52.277178 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 01:19:52.278535 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:52.339995 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 01:19:52.384027 auditctl[1750]: No rules Jan 28 01:19:52.389175 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:19:52.389729 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 01:19:52.410126 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:19:52.617897 augenrules[1769]: No rules Jan 28 01:19:52.627931 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:19:52.639189 sudo[1746]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:52.658101 sshd[1739]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:52.668249 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:43032.service: Deactivated successfully. Jan 28 01:19:52.676730 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:19:52.677000 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:19:52.726194 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:60238.service - OpenSSH per-connection server daemon (10.0.0.1:60238). Jan 28 01:19:52.816812 systemd-logind[1539]: Removed session 8. Jan 28 01:19:52.972553 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 60238 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:19:52.977887 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:53.008712 systemd-logind[1539]: New session 9 of user core. Jan 28 01:19:53.021529 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:19:53.103814 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:19:53.105316 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:19:54.524376 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 28 01:19:54.530464 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:19:57.961263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:19:57.986921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:00.774973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:00.785634 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:01.133320 dockerd[1800]: time="2026-01-28T01:20:01.130206410Z" level=info msg="Starting up" Jan 28 01:20:01.225135 kubelet[1822]: E0128 01:20:01.224459 1822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:01.235788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:01.236199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:02.834259 dockerd[1800]: time="2026-01-28T01:20:02.833963139Z" level=info msg="Loading containers: start." Jan 28 01:20:04.087523 kernel: Initializing XFRM netlink socket Jan 28 01:20:04.685019 systemd-networkd[1246]: docker0: Link UP Jan 28 01:20:04.871677 dockerd[1800]: time="2026-01-28T01:20:04.868905604Z" level=info msg="Loading containers: done." Jan 28 01:20:04.984893 dockerd[1800]: time="2026-01-28T01:20:04.984768890Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:20:04.985163 dockerd[1800]: time="2026-01-28T01:20:04.985049292Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:20:04.985436 dockerd[1800]: time="2026-01-28T01:20:04.985306399Z" level=info msg="Daemon has completed initialization" Jan 28 01:20:05.605969 dockerd[1800]: time="2026-01-28T01:20:05.600519324Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:20:05.608464 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:20:09.686536 containerd[1563]: time="2026-01-28T01:20:09.685282280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:20:11.463032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:20:11.495144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:13.121933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944931467.mount: Deactivated successfully. Jan 28 01:20:13.727670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:20:13.796885 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:15.088243 kubelet[1996]: E0128 01:20:15.086261 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:15.101380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:15.102346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:17.499048 update_engine[1542]: I20260128 01:20:17.478394 1542 update_attempter.cc:509] Updating boot flags... Jan 28 01:20:17.699667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2058) Jan 28 01:20:18.094750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2061) Jan 28 01:20:22.528319 containerd[1563]: time="2026-01-28T01:20:22.527885605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:22.530686 containerd[1563]: time="2026-01-28T01:20:22.530538313Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 01:20:22.532477 containerd[1563]: time="2026-01-28T01:20:22.532355327Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:22.539543 containerd[1563]: time="2026-01-28T01:20:22.539442990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:22.541931 containerd[1563]: time="2026-01-28T01:20:22.541820808Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 12.856401329s" Jan 28 01:20:22.541931 containerd[1563]: time="2026-01-28T01:20:22.541920490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 01:20:22.546438 containerd[1563]: time="2026-01-28T01:20:22.546407153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:20:24.667560 containerd[1563]: time="2026-01-28T01:20:24.667111822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:24.669306 containerd[1563]: time="2026-01-28T01:20:24.668840132Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 01:20:24.672233 containerd[1563]: time="2026-01-28T01:20:24.672168374Z" level=info msg="ImageCreate event 
name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:24.680194 containerd[1563]: time="2026-01-28T01:20:24.679676482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:24.681564 containerd[1563]: time="2026-01-28T01:20:24.681359770Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.134789943s" Jan 28 01:20:24.681564 containerd[1563]: time="2026-01-28T01:20:24.681425288Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 01:20:24.684513 containerd[1563]: time="2026-01-28T01:20:24.684415758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:20:25.201658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:20:25.208813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:25.456076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:25.465507 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:25.541027 kubelet[2080]: E0128 01:20:25.540934 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:25.544755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:25.545102 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:20:25.994790 containerd[1563]: time="2026-01-28T01:20:25.994673324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:25.995993 containerd[1563]: time="2026-01-28T01:20:25.995929055Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 01:20:25.998082 containerd[1563]: time="2026-01-28T01:20:25.997996053Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:26.002303 containerd[1563]: time="2026-01-28T01:20:26.002152606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:26.003899 containerd[1563]: time="2026-01-28T01:20:26.003775941Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.319292932s" Jan 28 01:20:26.003899 containerd[1563]: time="2026-01-28T01:20:26.003831714Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 01:20:26.004805 containerd[1563]: time="2026-01-28T01:20:26.004780052Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:20:27.575670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754878519.mount: Deactivated successfully. 
Jan 28 01:20:28.409283 containerd[1563]: time="2026-01-28T01:20:28.409092989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:28.410716 containerd[1563]: time="2026-01-28T01:20:28.410643826Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 01:20:28.412051 containerd[1563]: time="2026-01-28T01:20:28.412001212Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:28.416013 containerd[1563]: time="2026-01-28T01:20:28.415947977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:28.417043 containerd[1563]: time="2026-01-28T01:20:28.416566993Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.411643189s" Jan 28 01:20:28.417043 containerd[1563]: time="2026-01-28T01:20:28.416671079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 01:20:28.418820 containerd[1563]: time="2026-01-28T01:20:28.418673942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:20:28.926933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125645005.mount: Deactivated successfully. 
Jan 28 01:20:31.257380 containerd[1563]: time="2026-01-28T01:20:31.255497346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:31.260647 containerd[1563]: time="2026-01-28T01:20:31.260427351Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 01:20:31.263481 containerd[1563]: time="2026-01-28T01:20:31.263364799Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:31.269172 containerd[1563]: time="2026-01-28T01:20:31.268844755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:31.270810 containerd[1563]: time="2026-01-28T01:20:31.270708097Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.851992111s" Jan 28 01:20:31.270810 containerd[1563]: time="2026-01-28T01:20:31.270792963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 01:20:31.271680 containerd[1563]: time="2026-01-28T01:20:31.271655232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:20:31.820194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393209055.mount: Deactivated successfully. 
Jan 28 01:20:31.844350 containerd[1563]: time="2026-01-28T01:20:31.843973576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:31.847382 containerd[1563]: time="2026-01-28T01:20:31.847155618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 01:20:31.849159 containerd[1563]: time="2026-01-28T01:20:31.849093768Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:31.856618 containerd[1563]: time="2026-01-28T01:20:31.852564439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:31.856618 containerd[1563]: time="2026-01-28T01:20:31.853894762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.131964ms" Jan 28 01:20:31.856618 containerd[1563]: time="2026-01-28T01:20:31.853961688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:20:31.858570 containerd[1563]: time="2026-01-28T01:20:31.857240422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:20:33.175234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553532966.mount: Deactivated successfully. Jan 28 01:20:35.759219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:20:35.872012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:36.959682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:36.977971 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:37.270546 kubelet[2186]: E0128 01:20:37.268943 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:37.276423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:37.276837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:47.459415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:20:47.499129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:48.813314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:20:48.816625 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:20:49.111569 containerd[1563]: time="2026-01-28T01:20:49.108424380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:49.118451 containerd[1563]: time="2026-01-28T01:20:49.112375509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 28 01:20:49.118451 containerd[1563]: time="2026-01-28T01:20:49.116130075Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:49.164546 containerd[1563]: time="2026-01-28T01:20:49.163263525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:49.202254 containerd[1563]: time="2026-01-28T01:20:49.195452152Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 17.336904586s" Jan 28 01:20:49.202254 containerd[1563]: time="2026-01-28T01:20:49.199842996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 01:20:49.411009 kubelet[2247]: E0128 01:20:49.409227 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:20:49.422986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:20:49.423401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:20:57.531441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:57.549493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:57.623198 systemd[1]: Reloading requested from client PID 2288 ('systemctl') (unit session-9.scope)... Jan 28 01:20:57.623218 systemd[1]: Reloading... Jan 28 01:20:57.807678 zram_generator::config[2330]: No configuration found. Jan 28 01:20:58.070164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:20:58.187952 systemd[1]: Reloading finished in 564 ms. Jan 28 01:20:58.303783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:58.310407 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:20:58.312451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:58.316189 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 28 01:20:58.316981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:58.344099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:20:58.660787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:20:58.686343 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:20:58.816376 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:20:58.816376 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:20:58.816376 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:20:58.817138 kubelet[2391]: I0128 01:20:58.816521 2391 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:21:00.010285 kubelet[2391]: I0128 01:21:00.009323 2391 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:21:00.010285 kubelet[2391]: I0128 01:21:00.009365 2391 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:21:00.010285 kubelet[2391]: I0128 01:21:00.009772 2391 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:21:00.180696 kubelet[2391]: E0128 01:21:00.180472 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:00.182893 kubelet[2391]: I0128 01:21:00.182804 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:21:00.220394 kubelet[2391]: E0128 01:21:00.215619 2391 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:21:00.220394 kubelet[2391]: I0128 01:21:00.215694 2391 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:21:00.227676 kubelet[2391]: I0128 01:21:00.227316 2391 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:21:00.228567 kubelet[2391]: I0128 01:21:00.228481 2391 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:21:00.228905 kubelet[2391]: I0128 01:21:00.228526 2391 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:21:00.229218 kubelet[2391]: I0128 01:21:00.229118 2391 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:21:00.229218 kubelet[2391]: I0128 01:21:00.229142 2391 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:21:00.229758 kubelet[2391]: I0128 01:21:00.229637 2391 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:21:00.240322 kubelet[2391]: I0128 01:21:00.236266 2391 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:21:00.240322 kubelet[2391]: I0128 01:21:00.236333 2391 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:21:00.240322 kubelet[2391]: I0128 01:21:00.236471 2391 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:21:00.240322 kubelet[2391]: I0128 01:21:00.236514 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:21:00.246855 kubelet[2391]: W0128 01:21:00.246790 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:00.247051 kubelet[2391]: E0128 01:21:00.247000 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:00.249128 kubelet[2391]: I0128 01:21:00.249103 2391 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:21:00.251372 kubelet[2391]: W0128 01:21:00.249665 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:00.251372 kubelet[2391]: E0128 01:21:00.249758 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:00.253975 kubelet[2391]: I0128 01:21:00.252393 2391 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:21:00.253975 kubelet[2391]: W0128 01:21:00.253372 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:21:00.264689 kubelet[2391]: I0128 01:21:00.261731 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:21:00.264689 kubelet[2391]: I0128 01:21:00.261861 2391 server.go:1287] "Started kubelet" Jan 28 01:21:00.270252 kubelet[2391]: I0128 01:21:00.269513 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:21:00.275222 kubelet[2391]: I0128 01:21:00.272929 2391 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:21:00.276635 kubelet[2391]: I0128 01:21:00.275456 2391 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:21:00.280496 kubelet[2391]: I0128 01:21:00.277253 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:21:00.280496 kubelet[2391]: I0128 01:21:00.277681 2391 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:21:00.280496 kubelet[2391]: I0128 01:21:00.278007 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:21:00.280819 kubelet[2391]: I0128 01:21:00.280714 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:21:00.280934 kubelet[2391]: I0128 01:21:00.280879 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:21:00.281515 kubelet[2391]: I0128 01:21:00.281066 2391 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:21:00.281631 kubelet[2391]: W0128 01:21:00.281536 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:00.281687 kubelet[2391]: E0128 01:21:00.281637 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:00.281807 kubelet[2391]: E0128 01:21:00.276356 2391 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec068df7c4c7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:21:00.261780606 +0000 UTC m=+1.566368550,LastTimestamp:2026-01-28 01:21:00.261780606 +0000 UTC m=+1.566368550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:21:00.282087 kubelet[2391]: E0128 01:21:00.282029 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:21:00.282537 kubelet[2391]: E0128 01:21:00.282262 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Jan 28 01:21:00.292315 kubelet[2391]: I0128 01:21:00.292140 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:21:00.295129 kubelet[2391]: I0128 01:21:00.294815 2391 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:21:00.295129 kubelet[2391]: I0128 01:21:00.294853 2391 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:21:00.295635 kubelet[2391]: E0128 01:21:00.295504 2391 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:21:00.365254 kubelet[2391]: I0128 01:21:00.364992 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:21:00.370131 kubelet[2391]: I0128 01:21:00.370068 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:21:00.370346 kubelet[2391]: I0128 01:21:00.370212 2391 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:21:00.370346 kubelet[2391]: I0128 01:21:00.370319 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:21:00.370346 kubelet[2391]: I0128 01:21:00.370332 2391 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:21:00.370540 kubelet[2391]: E0128 01:21:00.370457 2391 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:21:00.371126 kubelet[2391]: W0128 01:21:00.371072 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:00.371126 kubelet[2391]: E0128 01:21:00.371116 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:00.377554 kubelet[2391]: I0128 01:21:00.377422 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:21:00.377554 kubelet[2391]: I0128 01:21:00.377443 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:21:00.377554 kubelet[2391]: I0128 01:21:00.377504 2391 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:21:00.382682 kubelet[2391]: E0128 01:21:00.382149 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:21:00.388284 kubelet[2391]: I0128 01:21:00.387500 2391 policy_none.go:49] "None policy: Start" Jan 28 01:21:00.388375 kubelet[2391]: I0128 01:21:00.388358 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:21:00.388747 kubelet[2391]: I0128 01:21:00.388461 2391 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:21:00.409229 kubelet[2391]: I0128 01:21:00.406893 2391 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:21:00.409229 kubelet[2391]: I0128 01:21:00.407313 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:21:00.409229 kubelet[2391]: I0128 01:21:00.407353 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:21:00.409229 kubelet[2391]: I0128 01:21:00.408072 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:21:00.413323 kubelet[2391]: E0128 01:21:00.413266 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:21:00.414861 kubelet[2391]: E0128 01:21:00.414417 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:21:00.482956 kubelet[2391]: E0128 01:21:00.482873 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Jan 28 01:21:00.502151 kubelet[2391]: E0128 01:21:00.499376 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:00.505093 kubelet[2391]: E0128 01:21:00.503049 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:00.505093 kubelet[2391]: E0128 01:21:00.504890 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:00.510113 kubelet[2391]: I0128 01:21:00.510035 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:00.511293 kubelet[2391]: E0128 01:21:00.511174 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 28 01:21:00.586039 kubelet[2391]: I0128 01:21:00.585386 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:00.586039 kubelet[2391]: I0128 01:21:00.585468 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:00.586039 kubelet[2391]: I0128 01:21:00.585506 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:00.586039 kubelet[2391]: I0128 01:21:00.585535 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:00.586039 kubelet[2391]: I0128 01:21:00.585558 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " 
pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:00.586500 kubelet[2391]: I0128 01:21:00.585637 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:00.586500 kubelet[2391]: I0128 01:21:00.585667 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:00.586500 kubelet[2391]: I0128 01:21:00.585695 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:00.586500 kubelet[2391]: I0128 01:21:00.585725 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:00.715248 kubelet[2391]: I0128 01:21:00.714571 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:00.715248 kubelet[2391]: E0128 01:21:00.715011 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 28 01:21:00.803408 kubelet[2391]: E0128 01:21:00.802247 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:00.804000 kubelet[2391]: E0128 01:21:00.803767 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:00.806399 containerd[1563]: time="2026-01-28T01:21:00.804073330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b04009aecd533090299b249e42c6d46c,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:00.808557 containerd[1563]: time="2026-01-28T01:21:00.808104528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:00.809087 kubelet[2391]: E0128 01:21:00.808804 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:00.811332 containerd[1563]: time="2026-01-28T01:21:00.809357334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:00.884732 kubelet[2391]: E0128 01:21:00.884276 2391 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Jan 28 01:21:01.061740 kubelet[2391]: W0128 01:21:01.060242 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:01.061740 kubelet[2391]: E0128 01:21:01.060375 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:01.120331 kubelet[2391]: I0128 01:21:01.119780 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:01.120331 kubelet[2391]: E0128 01:21:01.120148 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 28 01:21:01.480710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283744958.mount: Deactivated successfully. Jan 28 01:21:01.483147 kubelet[2391]: W0128 01:21:01.482800 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:01.483147 kubelet[2391]: E0128 01:21:01.482914 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:01.496235 containerd[1563]: time="2026-01-28T01:21:01.496110174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:01.501348 containerd[1563]: time="2026-01-28T01:21:01.501026548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:21:01.508304 containerd[1563]: time="2026-01-28T01:21:01.504653769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:01.515544 containerd[1563]: time="2026-01-28T01:21:01.513973355Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:01.516722 containerd[1563]: time="2026-01-28T01:21:01.516662557Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:01.521018 containerd[1563]: 
time="2026-01-28T01:21:01.520876712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:21:01.526632 containerd[1563]: time="2026-01-28T01:21:01.524777708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 01:21:01.537105 containerd[1563]: time="2026-01-28T01:21:01.536208556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 728.044008ms" Jan 28 01:21:01.543571 containerd[1563]: time="2026-01-28T01:21:01.538056170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.622837ms" Jan 28 01:21:01.543571 containerd[1563]: time="2026-01-28T01:21:01.538699639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:21:01.543571 containerd[1563]: time="2026-01-28T01:21:01.541370672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.945462ms" Jan 28 01:21:01.685846 kubelet[2391]: E0128 01:21:01.685762 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" Jan 28 01:21:01.824736 kubelet[2391]: W0128 01:21:01.823074 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:01.824736 kubelet[2391]: E0128 01:21:01.823219 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:01.925253 kubelet[2391]: I0128 01:21:01.924572 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:01.925253 kubelet[2391]: E0128 01:21:01.925088 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 28 01:21:02.146785 kubelet[2391]: W0128 01:21:02.144231 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:02.146785 kubelet[2391]: E0128 01:21:02.144458 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:02.188538 containerd[1563]: time="2026-01-28T01:21:02.186527729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:02.188538 containerd[1563]: time="2026-01-28T01:21:02.186664132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:02.188538 containerd[1563]: time="2026-01-28T01:21:02.186684338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:02.188538 containerd[1563]: time="2026-01-28T01:21:02.186829106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:02.197258 containerd[1563]: time="2026-01-28T01:21:02.192393543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:02.203568 containerd[1563]: time="2026-01-28T01:21:02.192479001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:02.203568 containerd[1563]: time="2026-01-28T01:21:02.202788352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:02.203568 containerd[1563]: time="2026-01-28T01:21:02.202992430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:02.208206 containerd[1563]: time="2026-01-28T01:21:02.205565633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:02.208206 containerd[1563]: time="2026-01-28T01:21:02.205744615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:02.208206 containerd[1563]: time="2026-01-28T01:21:02.205771915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:02.208206 containerd[1563]: time="2026-01-28T01:21:02.205890714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:02.668183 kubelet[2391]: E0128 01:21:02.505017 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:02.907239 containerd[1563]: time="2026-01-28T01:21:02.907122728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b04009aecd533090299b249e42c6d46c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1cc56c5557b0ee099bd27f554695260dc25aada43f815d9297ded17a242d9f\"" Jan 28 01:21:02.909897 kubelet[2391]: E0128 01:21:02.909515 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:02.912362 containerd[1563]: time="2026-01-28T01:21:02.912276732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"529cf66c1a3e37eb75cef1790730f349da4424939ba336fd207e46388cd4ca40\"" Jan 28 01:21:02.914788 containerd[1563]: time="2026-01-28T01:21:02.914711563Z" level=info msg="CreateContainer within sandbox \"8c1cc56c5557b0ee099bd27f554695260dc25aada43f815d9297ded17a242d9f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:21:02.915687 kubelet[2391]: E0128 01:21:02.915364 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:02.917328 containerd[1563]: time="2026-01-28T01:21:02.917272833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ebfdb73deca291472ad0ad012b2c1bfd997cd7977694f7c896e17471c0f6072\"" Jan 28 01:21:02.919185 kubelet[2391]: E0128 01:21:02.918731 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:02.923096 containerd[1563]: time="2026-01-28T01:21:02.922179942Z" level=info msg="CreateContainer within sandbox \"529cf66c1a3e37eb75cef1790730f349da4424939ba336fd207e46388cd4ca40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:21:02.924213 containerd[1563]: time="2026-01-28T01:21:02.924179618Z" level=info msg="CreateContainer within sandbox \"6ebfdb73deca291472ad0ad012b2c1bfd997cd7977694f7c896e17471c0f6072\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:21:02.947477 kubelet[2391]: E0128 01:21:02.947386 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec068df7c4c7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:21:00.261780606 +0000 UTC m=+1.566368550,LastTimestamp:2026-01-28 01:21:00.261780606 +0000 UTC m=+1.566368550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:21:02.958633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576284969.mount: Deactivated successfully. Jan 28 01:21:03.002349 containerd[1563]: time="2026-01-28T01:21:03.002233662Z" level=info msg="CreateContainer within sandbox \"529cf66c1a3e37eb75cef1790730f349da4424939ba336fd207e46388cd4ca40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"073fb7a009fd5b5c13294b8532da33c0d88104522d9f09f678380112f85dd1e8\"" Jan 28 01:21:03.004224 containerd[1563]: time="2026-01-28T01:21:03.003980248Z" level=info msg="StartContainer for \"073fb7a009fd5b5c13294b8532da33c0d88104522d9f09f678380112f85dd1e8\"" Jan 28 01:21:03.004224 containerd[1563]: time="2026-01-28T01:21:03.004025088Z" level=info msg="CreateContainer within sandbox \"8c1cc56c5557b0ee099bd27f554695260dc25aada43f815d9297ded17a242d9f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2488655f81d97b4cc34298977633371664c384bc0d96e1e26728aa80527aea48\"" Jan 28 01:21:03.004614 containerd[1563]: time="2026-01-28T01:21:03.004520244Z" level=info msg="StartContainer for \"2488655f81d97b4cc34298977633371664c384bc0d96e1e26728aa80527aea48\"" Jan 28 01:21:03.021008 containerd[1563]: time="2026-01-28T01:21:03.020760405Z" level=info msg="CreateContainer within sandbox \"6ebfdb73deca291472ad0ad012b2c1bfd997cd7977694f7c896e17471c0f6072\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bbad79fe6025a34cc971a662a559a37d5ebbb3e54f5385b88b6b9307d7b5e7f5\"" Jan 28 01:21:03.022268 containerd[1563]: time="2026-01-28T01:21:03.021870579Z" level=info msg="StartContainer for \"bbad79fe6025a34cc971a662a559a37d5ebbb3e54f5385b88b6b9307d7b5e7f5\"" Jan 28 01:21:03.190849 kubelet[2391]: W0128 01:21:03.187827 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:03.190849 kubelet[2391]: E0128 01:21:03.187876 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:03.287331 kubelet[2391]: E0128 01:21:03.287224 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="3.2s" Jan 28 01:21:03.340893 containerd[1563]: time="2026-01-28T01:21:03.340741822Z" level=info msg="StartContainer for \"073fb7a009fd5b5c13294b8532da33c0d88104522d9f09f678380112f85dd1e8\" returns successfully" Jan 28 01:21:03.342082 containerd[1563]: time="2026-01-28T01:21:03.342050835Z" level=info msg="StartContainer for \"2488655f81d97b4cc34298977633371664c384bc0d96e1e26728aa80527aea48\" returns successfully" Jan 28 01:21:03.462864 
containerd[1563]: time="2026-01-28T01:21:03.461904420Z" level=info msg="StartContainer for \"bbad79fe6025a34cc971a662a559a37d5ebbb3e54f5385b88b6b9307d7b5e7f5\" returns successfully" Jan 28 01:21:03.463901 kubelet[2391]: W0128 01:21:03.458532 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 28 01:21:03.463901 kubelet[2391]: E0128 01:21:03.463750 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:21:03.537624 kubelet[2391]: I0128 01:21:03.536791 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:03.537758 kubelet[2391]: E0128 01:21:03.537691 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 28 01:21:03.667642 kubelet[2391]: E0128 01:21:03.667519 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:03.667775 kubelet[2391]: E0128 01:21:03.667757 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:03.672663 kubelet[2391]: E0128 01:21:03.672460 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:03.672663 kubelet[2391]: E0128 01:21:03.672662 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:03.677360 kubelet[2391]: E0128 01:21:03.676727 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:03.677360 kubelet[2391]: E0128 01:21:03.676888 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:04.842079 kubelet[2391]: E0128 01:21:04.831715 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:04.842079 kubelet[2391]: E0128 01:21:04.832058 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:04.842079 kubelet[2391]: E0128 01:21:04.832411 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:04.842079 kubelet[2391]: E0128 01:21:04.833457 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:05.956621 
kubelet[2391]: E0128 01:21:05.954712 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:05.956621 kubelet[2391]: E0128 01:21:05.954991 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:06.020087 kubelet[2391]: E0128 01:21:06.019891 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:06.020446 kubelet[2391]: E0128 01:21:06.020304 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:06.757325 kubelet[2391]: I0128 01:21:06.754975 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:10.498548 kubelet[2391]: E0128 01:21:10.497111 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:21:11.055389 kubelet[2391]: E0128 01:21:11.055350 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:21:11.057038 kubelet[2391]: E0128 01:21:11.057014 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:13.208502 kubelet[2391]: E0128 01:21:13.207675 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 01:21:13.403756 kubelet[2391]: I0128 01:21:13.402289 2391 apiserver.go:52] "Watching apiserver" Jan 28 01:21:13.471112 kubelet[2391]: I0128 01:21:13.466468 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:21:13.488517 kubelet[2391]: I0128 01:21:13.483678 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:13.488517 kubelet[2391]: I0128 01:21:13.484873 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:21:14.365645 kubelet[2391]: E0128 01:21:14.362905 2391 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec068df7c4c7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:21:00.261780606 +0000 UTC m=+1.566368550,LastTimestamp:2026-01-28 01:21:00.261780606 +0000 UTC m=+1.566368550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:21:14.365645 kubelet[2391]: E0128 01:21:14.364510 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:14.365645 kubelet[2391]: 
I0128 01:21:14.364540 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:14.370113 kubelet[2391]: E0128 01:21:14.369040 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:14.370113 kubelet[2391]: I0128 01:21:14.369095 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:14.374106 kubelet[2391]: E0128 01:21:14.374064 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:16.175280 kubelet[2391]: I0128 01:21:16.172979 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:16.202432 kubelet[2391]: E0128 01:21:16.202391 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:17.195670 kubelet[2391]: E0128 01:21:17.191240 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:19.665440 systemd[1]: Reloading requested from client PID 2666 ('systemctl') (unit session-9.scope)... Jan 28 01:21:19.665479 systemd[1]: Reloading... Jan 28 01:21:19.787644 zram_generator::config[2708]: No configuration found. Jan 28 01:21:19.974562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:21:20.101003 systemd[1]: Reloading finished in 434 ms. Jan 28 01:21:20.167129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:20.197762 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:21:20.198630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:20.212260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:20.465367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:20.476858 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:21:20.651909 kubelet[2759]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:21:20.651909 kubelet[2759]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:21:20.651909 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
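Editorial aside: the "Failed to ensure lease exists, will retry" errors above back off from 200ms to 400ms, 800ms, 1.6s and finally 3.2s while the API server at 10.0.0.106:6443 is still refusing connections. A rough, hypothetical sketch of that kind of capped doubling backoff around a probe follows; probe_apiserver and wait_with_backoff are made-up names for illustration, not the kubelet's actual retry loop.

```python
import socket
import time

# Hypothetical sketch of a capped doubling backoff, mirroring the 200ms -> 3.2s
# progression seen in the lease-controller retries above. Not kubelet code.

def probe_apiserver(host: str = "10.0.0.106", port: int = 6443, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the API server endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_with_backoff(initial: float = 0.2, cap: float = 3.2, attempts: int = 10) -> bool:
    delay = initial
    for attempt in range(1, attempts + 1):
        if probe_apiserver():
            print(f"attempt {attempt}: connected")
            return True
        print(f"attempt {attempt}: connection refused, retrying in {delay:.1f}s")
        time.sleep(delay)
        delay = min(delay * 2, cap)  # double the interval, capped at 3.2s
    return False

if __name__ == "__main__":
    wait_with_backoff()
```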
Jan 28 01:21:20.651909 kubelet[2759]: I0128 01:21:20.651129 2759 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:21:20.672695 kubelet[2759]: I0128 01:21:20.671256 2759 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:21:20.672695 kubelet[2759]: I0128 01:21:20.671302 2759 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:21:20.673958 kubelet[2759]: I0128 01:21:20.673749 2759 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:21:20.676133 kubelet[2759]: I0128 01:21:20.676028 2759 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:21:20.680480 kubelet[2759]: I0128 01:21:20.679902 2759 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:21:20.688388 kubelet[2759]: E0128 01:21:20.688209 2759 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:21:20.688388 kubelet[2759]: I0128 01:21:20.688272 2759 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:21:20.701250 kubelet[2759]: I0128 01:21:20.701137 2759 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 01:21:20.706164 kubelet[2759]: I0128 01:21:20.702786 2759 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:21:20.706164 kubelet[2759]: I0128 01:21:20.702844 2759 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:21:20.706164 kubelet[2759]: I0128 01:21:20.703268 2759 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 28 01:21:20.706164 kubelet[2759]: I0128 01:21:20.703290 2759 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:21:20.706502 kubelet[2759]: I0128 01:21:20.703372 2759 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:21:20.706502 kubelet[2759]: I0128 01:21:20.703671 2759 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:21:20.706502 kubelet[2759]: I0128 01:21:20.703702 2759 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:21:20.706502 kubelet[2759]: I0128 01:21:20.703733 2759 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:21:20.706502 kubelet[2759]: I0128 01:21:20.703751 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:21:20.706775 kubelet[2759]: I0128 01:21:20.706679 2759 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:21:20.708852 kubelet[2759]: I0128 01:21:20.708379 2759 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:21:20.711333 kubelet[2759]: I0128 01:21:20.711312 2759 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:21:20.722889 kubelet[2759]: I0128 01:21:20.713296 2759 server.go:1287] "Started kubelet" Jan 28 01:21:20.728268 kubelet[2759]: I0128 01:21:20.728021 2759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:21:20.744929 kubelet[2759]: I0128 01:21:20.741521 2759 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:21:20.744929 kubelet[2759]: I0128 01:21:20.741755 2759 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:21:20.744929 kubelet[2759]: I0128 01:21:20.741979 2759 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:21:20.744929 kubelet[2759]: I0128 01:21:20.743357 2759 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:21:20.744929 kubelet[2759]: I0128 01:21:20.744556 2759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:21:20.747686 kubelet[2759]: I0128 01:21:20.745845 2759 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:21:20.747686 kubelet[2759]: I0128 01:21:20.746511 2759 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:21:20.749954 kubelet[2759]: I0128 01:21:20.749643 2759 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:21:20.762133 kubelet[2759]: I0128 01:21:20.762059 2759 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:21:20.762266 kubelet[2759]: I0128 01:21:20.762209 2759 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:21:20.763503 kubelet[2759]: E0128 01:21:20.763447 2759 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:21:20.765079 kubelet[2759]: I0128 01:21:20.764446 2759 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:21:20.780271 kubelet[2759]: I0128 01:21:20.780194 2759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:21:20.792637 kubelet[2759]: I0128 01:21:20.792522 2759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:21:20.792808 kubelet[2759]: I0128 01:21:20.792676 2759 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:21:20.792808 kubelet[2759]: I0128 01:21:20.792711 2759 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:21:20.792808 kubelet[2759]: I0128 01:21:20.792724 2759 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:21:20.792920 kubelet[2759]: E0128 01:21:20.792837 2759 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:21:20.894746 kubelet[2759]: E0128 01:21:20.894356 2759 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:21:20.898822 kubelet[2759]: I0128 01:21:20.898802 2759 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.898934 2759 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.898962 2759 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.899253 2759 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.899270 2759 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.899296 2759 policy_none.go:49] "None policy: Start" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.899312 2759 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.899330 2759 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:21:20.900468 kubelet[2759]: I0128 01:21:20.899489 2759 state_mem.go:75] "Updated machine memory state" Jan 28 01:21:20.905018 kubelet[2759]: I0128 01:21:20.903532 2759 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:21:20.906756 kubelet[2759]: I0128 01:21:20.906629 2759 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:21:20.906756 kubelet[2759]: I0128 01:21:20.906691 2759 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:21:20.909246 kubelet[2759]: I0128 01:21:20.907956 2759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:21:20.910092 kubelet[2759]: E0128 01:21:20.910070 2759 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:21:20.923321 sudo[2794]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 01:21:20.924500 sudo[2794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 01:21:21.024283 kubelet[2759]: I0128 01:21:21.023771 2759 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:21:21.076523 kubelet[2759]: I0128 01:21:21.076056 2759 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:21:21.076523 kubelet[2759]: I0128 01:21:21.076178 2759 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:21:21.099370 kubelet[2759]: I0128 01:21:21.096560 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:21.099370 kubelet[2759]: I0128 01:21:21.096905 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:21.099370 kubelet[2759]: I0128 01:21:21.098739 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:21.137431 kubelet[2759]: E0128 01:21:21.137257 2759 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:21.145541 kubelet[2759]: I0128 01:21:21.145464 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:21.145541 kubelet[2759]: I0128 01:21:21.145538 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:21.145789 kubelet[2759]: I0128 01:21:21.145570 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:21.145789 kubelet[2759]: I0128 01:21:21.145657 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:21.145789 kubelet[2759]: I0128 01:21:21.145686 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:21.145789 kubelet[2759]: I0128 01:21:21.145710 2759 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:21.145789 kubelet[2759]: I0128 01:21:21.145732 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:21.146672 kubelet[2759]: I0128 01:21:21.145757 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:21.146672 kubelet[2759]: I0128 01:21:21.145782 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:21:21.437022 kubelet[2759]: E0128 01:21:21.435917 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.442965 kubelet[2759]: E0128 01:21:21.439909 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.442965 kubelet[2759]: E0128 01:21:21.440093 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.704284 kubelet[2759]: I0128 01:21:21.704199 2759 apiserver.go:52] "Watching apiserver" Jan 28 01:21:21.743072 kubelet[2759]: I0128 01:21:21.742743 2759 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:21:21.820633 kubelet[2759]: I0128 01:21:21.818357 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:21.820633 kubelet[2759]: I0128 01:21:21.818653 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:21.820829 kubelet[2759]: E0128 01:21:21.820680 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.849908 kubelet[2759]: E0128 01:21:21.846891 2759 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 01:21:21.849908 kubelet[2759]: E0128 01:21:21.847130 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.849908 kubelet[2759]: E0128 01:21:21.847782 2759 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 01:21:21.849908 kubelet[2759]: E0128 01:21:21.847915 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.870777 kubelet[2759]: I0128 01:21:21.870682 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.870633137 podStartE2EDuration="870.633137ms" podCreationTimestamp="2026-01-28 01:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:21:21.869111488 +0000 UTC m=+1.378882126" watchObservedRunningTime="2026-01-28 01:21:21.870633137 +0000 UTC m=+1.380403765" Jan 28 01:21:21.905480 kubelet[2759]: I0128 01:21:21.905300 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.905279936 podStartE2EDuration="905.279936ms" podCreationTimestamp="2026-01-28 01:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:21:21.90499359 +0000 UTC m=+1.414764228" watchObservedRunningTime="2026-01-28 01:21:21.905279936 +0000 UTC m=+1.415050555" Jan 28 01:21:21.905982 kubelet[2759]: I0128 01:21:21.905515 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.905507344 podStartE2EDuration="5.905507344s" podCreationTimestamp="2026-01-28 01:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:21:21.892057507 +0000 UTC m=+1.401828125" watchObservedRunningTime="2026-01-28 01:21:21.905507344 +0000 UTC m=+1.415277972" Jan 28 01:21:22.018809 sudo[2794]: pam_unix(sudo:session): session closed for user root Jan 28 01:21:22.830682 kubelet[2759]: E0128 01:21:22.830135 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:22.830682 kubelet[2759]: E0128 01:21:22.830242 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:23.835354 kubelet[2759]: E0128 01:21:23.834883 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:23.922257 kubelet[2759]: I0128 01:21:23.921927 2759 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:21:23.924829 containerd[1563]: time="2026-01-28T01:21:23.924488961Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 28 01:21:23.926841 kubelet[2759]: I0128 01:21:23.926805 2759 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:21:24.987290 kubelet[2759]: I0128 01:21:24.986084 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9pc\" (UniqueName: \"kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-kube-api-access-vh9pc\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.987290 kubelet[2759]: I0128 01:21:24.986155 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-hostproc\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.987290 kubelet[2759]: I0128 01:21:24.986179 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-run\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.987290 kubelet[2759]: I0128 01:21:24.986198 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-etc-cni-netd\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.987290 kubelet[2759]: I0128 01:21:24.986218 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cni-path\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.987290 kubelet[2759]: I0128 01:21:24.986237 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-net\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.988706 kubelet[2759]: I0128 01:21:24.986256 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-hubble-tls\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.988706 kubelet[2759]: I0128 01:21:24.986276 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82a64e42-04eb-4eb6-8e7b-6641864556c1-clustermesh-secrets\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.988706 kubelet[2759]: I0128 01:21:24.986298 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-kernel\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.988706 
kubelet[2759]: I0128 01:21:24.986325 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95cc6a28-760a-4e9f-8edb-04037990e2e1-xtables-lock\") pod \"kube-proxy-dbrph\" (UID: \"95cc6a28-760a-4e9f-8edb-04037990e2e1\") " pod="kube-system/kube-proxy-dbrph" Jan 28 01:21:24.988706 kubelet[2759]: I0128 01:21:24.986348 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95cc6a28-760a-4e9f-8edb-04037990e2e1-lib-modules\") pod \"kube-proxy-dbrph\" (UID: \"95cc6a28-760a-4e9f-8edb-04037990e2e1\") " pod="kube-system/kube-proxy-dbrph" Jan 28 01:21:24.988706 kubelet[2759]: I0128 01:21:24.986370 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-bpf-maps\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.992773 kubelet[2759]: I0128 01:21:24.986388 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-lib-modules\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.992773 kubelet[2759]: I0128 01:21:24.986770 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-xtables-lock\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.992773 kubelet[2759]: I0128 01:21:24.986899 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95cc6a28-760a-4e9f-8edb-04037990e2e1-kube-proxy\") pod \"kube-proxy-dbrph\" (UID: \"95cc6a28-760a-4e9f-8edb-04037990e2e1\") " pod="kube-system/kube-proxy-dbrph" Jan 28 01:21:24.992773 kubelet[2759]: I0128 01:21:24.986971 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvww7\" (UniqueName: \"kubernetes.io/projected/95cc6a28-760a-4e9f-8edb-04037990e2e1-kube-api-access-pvww7\") pod \"kube-proxy-dbrph\" (UID: \"95cc6a28-760a-4e9f-8edb-04037990e2e1\") " pod="kube-system/kube-proxy-dbrph" Jan 28 01:21:24.992773 kubelet[2759]: I0128 01:21:24.987007 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-cgroup\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:24.992963 kubelet[2759]: I0128 01:21:24.987034 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-config-path\") pod \"cilium-c8fpj\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " pod="kube-system/cilium-c8fpj" Jan 28 01:21:25.088693 kubelet[2759]: I0128 01:21:25.088202 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6s4\" 
(UniqueName: \"kubernetes.io/projected/809f5307-df19-4d15-8b0c-5b85589c0b89-kube-api-access-hw6s4\") pod \"cilium-operator-6c4d7847fc-6nrv7\" (UID: \"809f5307-df19-4d15-8b0c-5b85589c0b89\") " pod="kube-system/cilium-operator-6c4d7847fc-6nrv7" Jan 28 01:21:25.088693 kubelet[2759]: I0128 01:21:25.088519 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/809f5307-df19-4d15-8b0c-5b85589c0b89-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6nrv7\" (UID: \"809f5307-df19-4d15-8b0c-5b85589c0b89\") " pod="kube-system/cilium-operator-6c4d7847fc-6nrv7" Jan 28 01:21:26.006341 kubelet[2759]: E0128 01:21:26.006301 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:26.009141 containerd[1563]: time="2026-01-28T01:21:26.008778496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbrph,Uid:95cc6a28-760a-4e9f-8edb-04037990e2e1,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:26.019333 kubelet[2759]: E0128 01:21:26.018958 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:26.021250 containerd[1563]: time="2026-01-28T01:21:26.021143833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8fpj,Uid:82a64e42-04eb-4eb6-8e7b-6641864556c1,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:26.084118 kubelet[2759]: E0128 01:21:26.084008 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:26.166568 kubelet[2759]: E0128 01:21:26.165707 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:26.168701 containerd[1563]: time="2026-01-28T01:21:26.167416895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6nrv7,Uid:809f5307-df19-4d15-8b0c-5b85589c0b89,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:26.284056 containerd[1563]: time="2026-01-28T01:21:26.281266697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:26.284056 containerd[1563]: time="2026-01-28T01:21:26.281567624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:26.284056 containerd[1563]: time="2026-01-28T01:21:26.281637154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:26.284056 containerd[1563]: time="2026-01-28T01:21:26.281978546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:26.289406 containerd[1563]: time="2026-01-28T01:21:26.289229919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:26.290751 sudo[1782]: pam_unix(sudo:session): session closed for user root Jan 28 01:21:26.293051 containerd[1563]: time="2026-01-28T01:21:26.291459467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:26.293051 containerd[1563]: time="2026-01-28T01:21:26.291530580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:26.293051 containerd[1563]: time="2026-01-28T01:21:26.291691325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:26.317301 sshd[1778]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:26.335727 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:60238.service: Deactivated successfully. Jan 28 01:21:26.384904 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:21:26.400143 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:21:26.424286 systemd-logind[1539]: Removed session 9. Jan 28 01:21:26.489924 containerd[1563]: time="2026-01-28T01:21:26.483539834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:26.489924 containerd[1563]: time="2026-01-28T01:21:26.483643667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:26.489924 containerd[1563]: time="2026-01-28T01:21:26.483676178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:26.489924 containerd[1563]: time="2026-01-28T01:21:26.483808291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:26.611969 containerd[1563]: time="2026-01-28T01:21:26.610216522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbrph,Uid:95cc6a28-760a-4e9f-8edb-04037990e2e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"312fb83af2a33f9b871910d1cab2f52680404bff3145352b350e576f8e5da9ae\"" Jan 28 01:21:26.612992 kubelet[2759]: E0128 01:21:26.612902 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:26.617996 containerd[1563]: time="2026-01-28T01:21:26.616810483Z" level=info msg="CreateContainer within sandbox \"312fb83af2a33f9b871910d1cab2f52680404bff3145352b350e576f8e5da9ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:21:26.663410 containerd[1563]: time="2026-01-28T01:21:26.661947296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8fpj,Uid:82a64e42-04eb-4eb6-8e7b-6641864556c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\"" Jan 28 01:21:26.665161 kubelet[2759]: E0128 01:21:26.664823 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:26.668546 containerd[1563]: time="2026-01-28T01:21:26.668301734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 28 01:21:26.744457 containerd[1563]: time="2026-01-28T01:21:26.740170906Z" level=info msg="CreateContainer within sandbox \"312fb83af2a33f9b871910d1cab2f52680404bff3145352b350e576f8e5da9ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2691fdc5012e3df9007f4c8645af2c07967826b635de304f6c59b72c1dd5fa4a\"" Jan 28 01:21:26.757991 containerd[1563]: time="2026-01-28T01:21:26.752995682Z" level=info msg="StartContainer for \"2691fdc5012e3df9007f4c8645af2c07967826b635de304f6c59b72c1dd5fa4a\"" Jan 28 01:21:26.857865 containerd[1563]: time="2026-01-28T01:21:26.857504307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6nrv7,Uid:809f5307-df19-4d15-8b0c-5b85589c0b89,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\"" Jan 28 01:21:26.859740 kubelet[2759]: E0128 01:21:26.859558 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:27.054610 kubelet[2759]: E0128 01:21:27.054386 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:27.157545 containerd[1563]: time="2026-01-28T01:21:27.157428427Z" level=info msg="StartContainer for \"2691fdc5012e3df9007f4c8645af2c07967826b635de304f6c59b72c1dd5fa4a\" returns successfully" Jan 28 01:21:28.091964 kubelet[2759]: E0128 01:21:28.090126 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:29.191787 kubelet[2759]: E0128 01:21:29.190657 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:29.796267 kubelet[2759]: E0128 01:21:29.789817 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:29.919137 kubelet[2759]: I0128 01:21:29.919065 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dbrph" podStartSLOduration=5.918962542 podStartE2EDuration="5.918962542s" podCreationTimestamp="2026-01-28 01:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:21:28.146451644 +0000 UTC m=+7.656222282" watchObservedRunningTime="2026-01-28 01:21:29.918962542 +0000 UTC m=+9.428733160" Jan 28 01:21:30.130525 kubelet[2759]: E0128 01:21:30.129168 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:31.134639 kubelet[2759]: E0128 01:21:31.132979 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:31.565453 kubelet[2759]: E0128 01:21:31.564937 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:32.139996 kubelet[2759]: E0128 01:21:32.139509 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:34.680628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274304033.mount: Deactivated successfully. 
Jan 28 01:21:39.450205 containerd[1563]: time="2026-01-28T01:21:39.445680473Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:39.450205 containerd[1563]: time="2026-01-28T01:21:39.448184647Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 28 01:21:39.452042 containerd[1563]: time="2026-01-28T01:21:39.451931593Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:39.455800 containerd[1563]: time="2026-01-28T01:21:39.454917214Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.786569182s" Jan 28 01:21:39.455800 containerd[1563]: time="2026-01-28T01:21:39.454996879Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 28 01:21:39.459930 containerd[1563]: time="2026-01-28T01:21:39.459691239Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 28 01:21:39.461629 containerd[1563]: time="2026-01-28T01:21:39.461500898Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 01:21:39.488804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225315193.mount: Deactivated successfully. 
Jan 28 01:21:39.494995 containerd[1563]: time="2026-01-28T01:21:39.493945214Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\"" Jan 28 01:21:39.494995 containerd[1563]: time="2026-01-28T01:21:39.494812039Z" level=info msg="StartContainer for \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\"" Jan 28 01:21:39.614144 containerd[1563]: time="2026-01-28T01:21:39.613950730Z" level=info msg="StartContainer for \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\" returns successfully" Jan 28 01:21:39.875963 containerd[1563]: time="2026-01-28T01:21:39.875793415Z" level=info msg="shim disconnected" id=cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322 namespace=k8s.io Jan 28 01:21:39.876646 containerd[1563]: time="2026-01-28T01:21:39.876301096Z" level=warning msg="cleaning up after shim disconnected" id=cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322 namespace=k8s.io Jan 28 01:21:39.876646 containerd[1563]: time="2026-01-28T01:21:39.876361962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:21:39.910832 containerd[1563]: time="2026-01-28T01:21:39.910759046Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:21:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:21:40.184680 kubelet[2759]: E0128 01:21:40.184424 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:40.188537 containerd[1563]: time="2026-01-28T01:21:40.188453954Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 01:21:40.335859 containerd[1563]: time="2026-01-28T01:21:40.335702941Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\"" Jan 28 01:21:40.351991 containerd[1563]: time="2026-01-28T01:21:40.351391803Z" level=info msg="StartContainer for \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\"" Jan 28 01:21:40.485518 containerd[1563]: time="2026-01-28T01:21:40.485440127Z" level=info msg="StartContainer for \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\" returns successfully" Jan 28 01:21:40.497327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322-rootfs.mount: Deactivated successfully. Jan 28 01:21:40.515100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397655189.mount: Deactivated successfully. Jan 28 01:21:40.530922 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:21:40.531389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:21:40.531492 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:21:40.564799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 28 01:21:40.615818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838-rootfs.mount: Deactivated successfully. Jan 28 01:21:40.624863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:21:40.674101 containerd[1563]: time="2026-01-28T01:21:40.673824175Z" level=info msg="shim disconnected" id=5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838 namespace=k8s.io Jan 28 01:21:40.674101 containerd[1563]: time="2026-01-28T01:21:40.673888639Z" level=warning msg="cleaning up after shim disconnected" id=5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838 namespace=k8s.io Jan 28 01:21:40.674101 containerd[1563]: time="2026-01-28T01:21:40.673903009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:21:41.189004 kubelet[2759]: E0128 01:21:41.188640 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:41.192941 containerd[1563]: time="2026-01-28T01:21:41.192756546Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 01:21:41.231143 containerd[1563]: time="2026-01-28T01:21:41.230988305Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\"" Jan 28 01:21:41.237197 containerd[1563]: time="2026-01-28T01:21:41.237090951Z" level=info msg="StartContainer for \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\"" Jan 28 01:21:41.359737 containerd[1563]: time="2026-01-28T01:21:41.359687532Z" level=info msg="StartContainer for \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\" returns successfully" Jan 28 01:21:41.473277 containerd[1563]: time="2026-01-28T01:21:41.473000904Z" level=info msg="shim disconnected" id=a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07 namespace=k8s.io Jan 28 01:21:41.473277 containerd[1563]: time="2026-01-28T01:21:41.473059254Z" level=warning msg="cleaning up after shim disconnected" id=a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07 namespace=k8s.io Jan 28 01:21:41.473277 containerd[1563]: time="2026-01-28T01:21:41.473071610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:21:41.501638 containerd[1563]: time="2026-01-28T01:21:41.501485016Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:21:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:21:41.614781 containerd[1563]: time="2026-01-28T01:21:41.614568670Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:41.615657 containerd[1563]: time="2026-01-28T01:21:41.615536764Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 28 01:21:41.617437 containerd[1563]: time="2026-01-28T01:21:41.617314168Z" 
level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:41.620181 containerd[1563]: time="2026-01-28T01:21:41.620058592Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.160329876s" Jan 28 01:21:41.620181 containerd[1563]: time="2026-01-28T01:21:41.620140521Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 28 01:21:41.629479 containerd[1563]: time="2026-01-28T01:21:41.628812732Z" level=info msg="CreateContainer within sandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 28 01:21:41.660386 containerd[1563]: time="2026-01-28T01:21:41.658569149Z" level=info msg="CreateContainer within sandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\"" Jan 28 01:21:41.660386 containerd[1563]: time="2026-01-28T01:21:41.660133174Z" level=info msg="StartContainer for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\"" Jan 28 01:21:41.802412 containerd[1563]: time="2026-01-28T01:21:41.802090097Z" level=info msg="StartContainer for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" returns successfully" Jan 28 01:21:42.193254 kubelet[2759]: E0128 01:21:42.193055 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:42.196066 kubelet[2759]: E0128 01:21:42.196007 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:42.198429 containerd[1563]: time="2026-01-28T01:21:42.198373569Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 01:21:42.262058 containerd[1563]: time="2026-01-28T01:21:42.261977288Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\"" Jan 28 01:21:42.266058 containerd[1563]: time="2026-01-28T01:21:42.265988684Z" level=info msg="StartContainer for \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\"" Jan 28 01:21:42.328398 kubelet[2759]: I0128 01:21:42.328314 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6nrv7" podStartSLOduration=3.590667513 podStartE2EDuration="18.328298111s" podCreationTimestamp="2026-01-28 01:21:24 +0000 UTC" 
firstStartedPulling="2026-01-28 01:21:26.885782688 +0000 UTC m=+6.395553306" lastFinishedPulling="2026-01-28 01:21:41.623413276 +0000 UTC m=+21.133183904" observedRunningTime="2026-01-28 01:21:42.327754153 +0000 UTC m=+21.837524771" watchObservedRunningTime="2026-01-28 01:21:42.328298111 +0000 UTC m=+21.838068730" Jan 28 01:21:42.446381 containerd[1563]: time="2026-01-28T01:21:42.445797149Z" level=info msg="StartContainer for \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\" returns successfully" Jan 28 01:21:42.536667 containerd[1563]: time="2026-01-28T01:21:42.533711656Z" level=info msg="shim disconnected" id=6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa namespace=k8s.io Jan 28 01:21:42.538831 containerd[1563]: time="2026-01-28T01:21:42.538649952Z" level=warning msg="cleaning up after shim disconnected" id=6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa namespace=k8s.io Jan 28 01:21:42.538906 containerd[1563]: time="2026-01-28T01:21:42.538803948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:21:43.207761 kubelet[2759]: E0128 01:21:43.206364 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:43.207761 kubelet[2759]: E0128 01:21:43.206884 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:43.216036 containerd[1563]: time="2026-01-28T01:21:43.215336733Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 01:21:43.254391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197749848.mount: Deactivated successfully. 
Jan 28 01:21:43.261661 containerd[1563]: time="2026-01-28T01:21:43.261536897Z" level=info msg="CreateContainer within sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\"" Jan 28 01:21:43.263931 containerd[1563]: time="2026-01-28T01:21:43.262146432Z" level=info msg="StartContainer for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\"" Jan 28 01:21:43.385465 containerd[1563]: time="2026-01-28T01:21:43.385372001Z" level=info msg="StartContainer for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" returns successfully" Jan 28 01:21:43.585542 kubelet[2759]: I0128 01:21:43.584314 2759 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:21:43.737776 kubelet[2759]: I0128 01:21:43.737559 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phqm\" (UniqueName: \"kubernetes.io/projected/db52a4f4-0fa0-4642-a0fc-3b0e595cb2b9-kube-api-access-8phqm\") pod \"coredns-668d6bf9bc-ssc7k\" (UID: \"db52a4f4-0fa0-4642-a0fc-3b0e595cb2b9\") " pod="kube-system/coredns-668d6bf9bc-ssc7k" Jan 28 01:21:43.737776 kubelet[2759]: I0128 01:21:43.737730 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a02a2eae-56e5-4f91-9081-1d9e5a379017-config-volume\") pod \"coredns-668d6bf9bc-zdbcd\" (UID: \"a02a2eae-56e5-4f91-9081-1d9e5a379017\") " pod="kube-system/coredns-668d6bf9bc-zdbcd" Jan 28 01:21:43.737776 kubelet[2759]: I0128 01:21:43.737757 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db52a4f4-0fa0-4642-a0fc-3b0e595cb2b9-config-volume\") pod \"coredns-668d6bf9bc-ssc7k\" (UID: \"db52a4f4-0fa0-4642-a0fc-3b0e595cb2b9\") " pod="kube-system/coredns-668d6bf9bc-ssc7k" Jan 28 01:21:43.737776 kubelet[2759]: I0128 01:21:43.737772 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wndl\" (UniqueName: \"kubernetes.io/projected/a02a2eae-56e5-4f91-9081-1d9e5a379017-kube-api-access-8wndl\") pod \"coredns-668d6bf9bc-zdbcd\" (UID: \"a02a2eae-56e5-4f91-9081-1d9e5a379017\") " pod="kube-system/coredns-668d6bf9bc-zdbcd" Jan 28 01:21:43.955890 kubelet[2759]: E0128 01:21:43.955669 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:43.958788 containerd[1563]: time="2026-01-28T01:21:43.958073308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssc7k,Uid:db52a4f4-0fa0-4642-a0fc-3b0e595cb2b9,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:43.974515 kubelet[2759]: E0128 01:21:43.972040 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:43.974871 containerd[1563]: time="2026-01-28T01:21:43.972483397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zdbcd,Uid:a02a2eae-56e5-4f91-9081-1d9e5a379017,Namespace:kube-system,Attempt:0,}" Jan 28 01:21:44.242313 kubelet[2759]: E0128 01:21:44.241732 2759 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:44.298981 kubelet[2759]: I0128 01:21:44.298775 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c8fpj" podStartSLOduration=7.508165871 podStartE2EDuration="20.298744996s" podCreationTimestamp="2026-01-28 01:21:24 +0000 UTC" firstStartedPulling="2026-01-28 01:21:26.666621137 +0000 UTC m=+6.176391755" lastFinishedPulling="2026-01-28 01:21:39.457200262 +0000 UTC m=+18.966970880" observedRunningTime="2026-01-28 01:21:44.29020541 +0000 UTC m=+23.799976057" watchObservedRunningTime="2026-01-28 01:21:44.298744996 +0000 UTC m=+23.808515614" Jan 28 01:21:45.239742 kubelet[2759]: E0128 01:21:45.239677 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:46.009846 systemd-networkd[1246]: cilium_host: Link UP Jan 28 01:21:46.010124 systemd-networkd[1246]: cilium_net: Link UP Jan 28 01:21:46.011137 systemd-networkd[1246]: cilium_net: Gained carrier Jan 28 01:21:46.011470 systemd-networkd[1246]: cilium_host: Gained carrier Jan 28 01:21:46.011801 systemd-networkd[1246]: cilium_net: Gained IPv6LL Jan 28 01:21:46.012059 systemd-networkd[1246]: cilium_host: Gained IPv6LL Jan 28 01:21:46.188034 systemd-networkd[1246]: cilium_vxlan: Link UP Jan 28 01:21:46.188046 systemd-networkd[1246]: cilium_vxlan: Gained carrier Jan 28 01:21:46.241890 kubelet[2759]: E0128 01:21:46.241567 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:46.497678 kernel: NET: Registered PF_ALG protocol family Jan 28 01:21:47.341630 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL Jan 28 01:21:47.777237 systemd-networkd[1246]: lxc_health: Link UP Jan 28 01:21:47.788295 systemd-networkd[1246]: lxc_health: Gained carrier Jan 28 01:21:48.025793 kubelet[2759]: E0128 01:21:48.024486 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:48.192137 systemd-networkd[1246]: lxc962e81e4b6c1: Link UP Jan 28 01:21:48.202666 kernel: eth0: renamed from tmpc152e Jan 28 01:21:48.216250 systemd-networkd[1246]: lxc962e81e4b6c1: Gained carrier Jan 28 01:21:48.228380 systemd-networkd[1246]: lxccf6a5fc3658a: Link UP Jan 28 01:21:48.243125 kernel: eth0: renamed from tmp83237 Jan 28 01:21:48.251320 kubelet[2759]: E0128 01:21:48.249113 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:48.264892 systemd-networkd[1246]: lxccf6a5fc3658a: Gained carrier Jan 28 01:21:49.252987 kubelet[2759]: E0128 01:21:49.249731 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:49.453245 systemd-networkd[1246]: lxc962e81e4b6c1: Gained IPv6LL Jan 28 01:21:49.518843 systemd-networkd[1246]: lxc_health: Gained IPv6LL Jan 28 01:21:49.837293 systemd-networkd[1246]: lxccf6a5fc3658a: Gained IPv6LL Jan 28 01:21:53.506310 containerd[1563]: time="2026-01-28T01:21:53.504627979Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:53.506310 containerd[1563]: time="2026-01-28T01:21:53.504711979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:53.507352 containerd[1563]: time="2026-01-28T01:21:53.505462491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:53.507352 containerd[1563]: time="2026-01-28T01:21:53.505675731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:53.548336 containerd[1563]: time="2026-01-28T01:21:53.547648610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:21:53.548336 containerd[1563]: time="2026-01-28T01:21:53.547730505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:21:53.548336 containerd[1563]: time="2026-01-28T01:21:53.547752510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:53.548680 containerd[1563]: time="2026-01-28T01:21:53.548094008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:21:53.565506 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:21:53.580176 systemd[1]: run-containerd-runc-k8s.io-c152e52ac9fd58f2e308f8f48a14e4791adf19986b1909563981fe5b2d747a5b-runc.dWfiNk.mount: Deactivated successfully. 
Jan 28 01:21:53.597406 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:21:53.616900 containerd[1563]: time="2026-01-28T01:21:53.616852735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zdbcd,Uid:a02a2eae-56e5-4f91-9081-1d9e5a379017,Namespace:kube-system,Attempt:0,} returns sandbox id \"83237612add9b9fa79386503ed9bac895c278b54fef1e99e06c7e8d7cbc2d032\"" Jan 28 01:21:53.618133 kubelet[2759]: E0128 01:21:53.618102 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:53.625734 containerd[1563]: time="2026-01-28T01:21:53.623872313Z" level=info msg="CreateContainer within sandbox \"83237612add9b9fa79386503ed9bac895c278b54fef1e99e06c7e8d7cbc2d032\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:21:53.650013 containerd[1563]: time="2026-01-28T01:21:53.649979617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssc7k,Uid:db52a4f4-0fa0-4642-a0fc-3b0e595cb2b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c152e52ac9fd58f2e308f8f48a14e4791adf19986b1909563981fe5b2d747a5b\"" Jan 28 01:21:53.651039 kubelet[2759]: E0128 01:21:53.650953 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:53.653183 containerd[1563]: time="2026-01-28T01:21:53.653080216Z" level=info msg="CreateContainer within sandbox \"c152e52ac9fd58f2e308f8f48a14e4791adf19986b1909563981fe5b2d747a5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:21:53.684004 containerd[1563]: time="2026-01-28T01:21:53.683894772Z" level=info msg="CreateContainer within sandbox \"83237612add9b9fa79386503ed9bac895c278b54fef1e99e06c7e8d7cbc2d032\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7703538ce1fa5f4b17dcbf3c1fe59a26ecc2cef2264454271dc84277bff49a37\"" Jan 28 01:21:53.684004 containerd[1563]: time="2026-01-28T01:21:53.684529261Z" level=info msg="StartContainer for \"7703538ce1fa5f4b17dcbf3c1fe59a26ecc2cef2264454271dc84277bff49a37\"" Jan 28 01:21:53.695282 containerd[1563]: time="2026-01-28T01:21:53.693393598Z" level=info msg="CreateContainer within sandbox \"c152e52ac9fd58f2e308f8f48a14e4791adf19986b1909563981fe5b2d747a5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef86f475f438f9033f21b2f39d3bec511d82f0dc28f9a7665a7db1184e1784bb\"" Jan 28 01:21:53.698934 containerd[1563]: time="2026-01-28T01:21:53.696744718Z" level=info msg="StartContainer for \"ef86f475f438f9033f21b2f39d3bec511d82f0dc28f9a7665a7db1184e1784bb\"" Jan 28 01:21:53.840531 containerd[1563]: time="2026-01-28T01:21:53.840354931Z" level=info msg="StartContainer for \"ef86f475f438f9033f21b2f39d3bec511d82f0dc28f9a7665a7db1184e1784bb\" returns successfully" Jan 28 01:21:53.840531 containerd[1563]: time="2026-01-28T01:21:53.840460704Z" level=info msg="StartContainer for \"7703538ce1fa5f4b17dcbf3c1fe59a26ecc2cef2264454271dc84277bff49a37\" returns successfully" Jan 28 01:21:54.307009 kubelet[2759]: E0128 01:21:54.306699 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:54.319758 kubelet[2759]: E0128 01:21:54.318634 2759 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:54.370823 kubelet[2759]: I0128 01:21:54.370381 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ssc7k" podStartSLOduration=30.370357869 podStartE2EDuration="30.370357869s" podCreationTimestamp="2026-01-28 01:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:21:54.338365773 +0000 UTC m=+33.848136391" watchObservedRunningTime="2026-01-28 01:21:54.370357869 +0000 UTC m=+33.880128488" Jan 28 01:21:54.411073 kubelet[2759]: I0128 01:21:54.410983 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zdbcd" podStartSLOduration=30.410965954 podStartE2EDuration="30.410965954s" podCreationTimestamp="2026-01-28 01:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:21:54.41068092 +0000 UTC m=+33.920451538" watchObservedRunningTime="2026-01-28 01:21:54.410965954 +0000 UTC m=+33.920736572" Jan 28 01:21:55.324629 kubelet[2759]: E0128 01:21:55.324461 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:55.325466 kubelet[2759]: E0128 01:21:55.324729 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:56.329810 kubelet[2759]: E0128 01:21:56.329723 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:56.331359 kubelet[2759]: E0128 01:21:56.330549 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:14.744177 kubelet[2759]: E0128 01:22:14.740011 2759 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.316s" Jan 28 01:22:34.797407 kubelet[2759]: E0128 01:22:34.796634 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:39.112414 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:53108.service - OpenSSH per-connection server daemon (10.0.0.1:53108). Jan 28 01:22:39.247045 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 53108 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:22:39.251891 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:22:39.265418 systemd-logind[1539]: New session 10 of user core. Jan 28 01:22:39.280365 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 01:22:39.795501 kubelet[2759]: E0128 01:22:39.795388 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:39.846967 sshd[4160]: pam_unix(sshd:session): session closed for user core Jan 28 01:22:39.854985 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:53108.service: Deactivated successfully. Jan 28 01:22:39.859862 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:22:39.864872 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:22:39.867101 systemd-logind[1539]: Removed session 10. Jan 28 01:22:44.871010 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:49060.service - OpenSSH per-connection server daemon (10.0.0.1:49060). Jan 28 01:22:44.932509 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 49060 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:22:44.935233 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:22:44.943570 systemd-logind[1539]: New session 11 of user core. Jan 28 01:22:44.948376 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:22:45.189382 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 28 01:22:45.203951 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:49060.service: Deactivated successfully. Jan 28 01:22:45.208005 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:22:45.208307 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:22:45.212339 systemd-logind[1539]: Removed session 11. Jan 28 01:22:50.200965 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:49068.service - OpenSSH per-connection server daemon (10.0.0.1:49068). Jan 28 01:22:50.268381 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 49068 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:22:50.271161 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:22:50.290050 systemd-logind[1539]: New session 12 of user core. Jan 28 01:22:50.299102 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:22:50.584710 sshd[4193]: pam_unix(sshd:session): session closed for user core Jan 28 01:22:50.590860 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:49068.service: Deactivated successfully. Jan 28 01:22:50.601392 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:22:50.606682 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:22:50.609667 systemd-logind[1539]: Removed session 12. Jan 28 01:22:50.799327 kubelet[2759]: E0128 01:22:50.797099 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:22:55.606024 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:60584.service - OpenSSH per-connection server daemon (10.0.0.1:60584). Jan 28 01:22:55.672180 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 60584 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:22:55.673883 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:22:55.688818 systemd-logind[1539]: New session 13 of user core. Jan 28 01:22:55.701076 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 01:22:55.948436 sshd[4210]: pam_unix(sshd:session): session closed for user core Jan 28 01:22:55.962021 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:60584.service: Deactivated successfully. Jan 28 01:22:55.965314 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:22:55.965774 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:22:55.968535 systemd-logind[1539]: Removed session 13. Jan 28 01:22:57.797000 kubelet[2759]: E0128 01:22:57.796693 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:00.797897 kubelet[2759]: E0128 01:23:00.795379 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:00.974036 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:60590.service - OpenSSH per-connection server daemon (10.0.0.1:60590). Jan 28 01:23:01.053455 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 60590 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:01.057105 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:01.070660 systemd-logind[1539]: New session 14 of user core. Jan 28 01:23:01.084250 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:23:01.298064 sshd[4228]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:01.304245 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:23:01.304455 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:60590.service: Deactivated successfully. Jan 28 01:23:01.307809 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:23:01.308855 systemd-logind[1539]: Removed session 14. Jan 28 01:23:01.794916 kubelet[2759]: E0128 01:23:01.794654 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:06.326209 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:48416.service - OpenSSH per-connection server daemon (10.0.0.1:48416). Jan 28 01:23:06.388005 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 48416 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:06.390435 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:06.405233 systemd-logind[1539]: New session 15 of user core. Jan 28 01:23:06.423188 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:23:06.663624 sshd[4244]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:06.674999 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:48416.service: Deactivated successfully. Jan 28 01:23:06.686520 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:23:06.687300 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:23:06.691692 systemd-logind[1539]: Removed session 15. Jan 28 01:23:09.808231 kubelet[2759]: E0128 01:23:09.806117 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:11.676665 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:48430.service - OpenSSH per-connection server daemon (10.0.0.1:48430). 
Jan 28 01:23:11.740315 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 48430 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:11.742725 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:11.757090 systemd-logind[1539]: New session 16 of user core. Jan 28 01:23:11.774001 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:23:11.965201 sshd[4260]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:11.974489 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:48434.service - OpenSSH per-connection server daemon (10.0.0.1:48434). Jan 28 01:23:11.976110 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:48430.service: Deactivated successfully. Jan 28 01:23:11.982879 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:23:11.984717 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:23:11.988268 systemd-logind[1539]: Removed session 16. Jan 28 01:23:12.031512 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 48434 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:12.033250 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:12.043963 systemd-logind[1539]: New session 17 of user core. Jan 28 01:23:12.054206 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:23:12.314547 sshd[4273]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:12.326000 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:48438.service - OpenSSH per-connection server daemon (10.0.0.1:48438). Jan 28 01:23:12.326802 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:48434.service: Deactivated successfully. Jan 28 01:23:12.335281 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:23:12.345688 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:23:12.350009 systemd-logind[1539]: Removed session 17. Jan 28 01:23:12.367682 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 48438 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:12.369833 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:12.376995 systemd-logind[1539]: New session 18 of user core. Jan 28 01:23:12.393929 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:23:12.572832 sshd[4286]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:12.580871 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:48438.service: Deactivated successfully. Jan 28 01:23:12.590836 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:23:12.594849 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:23:12.603364 systemd-logind[1539]: Removed session 18. Jan 28 01:23:15.987287 kubelet[2759]: E0128 01:23:15.987229 2759 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.182s" Jan 28 01:23:16.794319 kubelet[2759]: E0128 01:23:16.794083 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:17.604057 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:53454.service - OpenSSH per-connection server daemon (10.0.0.1:53454). 
Jan 28 01:23:17.659684 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 53454 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:17.662867 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:17.673248 systemd-logind[1539]: New session 19 of user core. Jan 28 01:23:17.688044 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:23:17.937158 sshd[4305]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:17.942338 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:53454.service: Deactivated successfully. Jan 28 01:23:17.949010 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:23:17.952018 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:23:17.954813 systemd-logind[1539]: Removed session 19. Jan 28 01:23:22.954062 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:39416.service - OpenSSH per-connection server daemon (10.0.0.1:39416). Jan 28 01:23:23.038413 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 39416 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:23.042088 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:23.059833 systemd-logind[1539]: New session 20 of user core. Jan 28 01:23:23.071097 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:23:23.247441 sshd[4322]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:23.264381 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:39416.service: Deactivated successfully. Jan 28 01:23:23.270238 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:23:23.271571 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:23:23.274496 systemd-logind[1539]: Removed session 20. Jan 28 01:23:28.270071 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:39426.service - OpenSSH per-connection server daemon (10.0.0.1:39426). Jan 28 01:23:28.335026 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 39426 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:28.338275 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:28.348640 systemd-logind[1539]: New session 21 of user core. Jan 28 01:23:28.356383 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:23:28.549261 sshd[4340]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:28.557644 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:39426.service: Deactivated successfully. Jan 28 01:23:28.565287 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:23:28.567849 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:23:28.572036 systemd-logind[1539]: Removed session 21. Jan 28 01:23:33.570171 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:39010.service - OpenSSH per-connection server daemon (10.0.0.1:39010). Jan 28 01:23:33.615680 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 39010 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:33.619007 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:33.626849 systemd-logind[1539]: New session 22 of user core. Jan 28 01:23:33.637739 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 01:23:33.848172 sshd[4356]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:33.861472 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:39010.service: Deactivated successfully. Jan 28 01:23:33.864989 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:23:33.865086 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:23:33.867091 systemd-logind[1539]: Removed session 22. Jan 28 01:23:38.881062 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:39016.service - OpenSSH per-connection server daemon (10.0.0.1:39016). Jan 28 01:23:38.930144 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 39016 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:38.935368 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:38.959836 systemd-logind[1539]: New session 23 of user core. Jan 28 01:23:38.973165 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:23:39.168782 sshd[4371]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:39.175882 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:39016.service: Deactivated successfully. Jan 28 01:23:39.179270 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:23:39.179312 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:23:39.182491 systemd-logind[1539]: Removed session 23. Jan 28 01:23:44.188035 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:54826.service - OpenSSH per-connection server daemon (10.0.0.1:54826). Jan 28 01:23:44.239950 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 54826 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:44.242923 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:44.256802 systemd-logind[1539]: New session 24 of user core. Jan 28 01:23:44.263097 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:23:44.433789 sshd[4387]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:44.443066 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:54826.service: Deactivated successfully. Jan 28 01:23:44.449295 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:23:44.461727 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:23:44.464037 systemd-logind[1539]: Removed session 24. Jan 28 01:23:49.452383 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:54840.service - OpenSSH per-connection server daemon (10.0.0.1:54840). Jan 28 01:23:49.524277 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:49.532141 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:49.540482 systemd-logind[1539]: New session 25 of user core. Jan 28 01:23:49.556374 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:23:49.848413 sshd[4403]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:49.866393 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:54840.service: Deactivated successfully. Jan 28 01:23:49.875060 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:23:49.876825 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:23:49.881557 systemd-logind[1539]: Removed session 25. 
Jan 28 01:23:50.797249 kubelet[2759]: E0128 01:23:50.796363 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:23:54.877432 systemd[1]: Started sshd@25-10.0.0.106:22-10.0.0.1:56142.service - OpenSSH per-connection server daemon (10.0.0.1:56142). Jan 28 01:23:54.933639 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 56142 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:23:54.939004 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:54.971072 systemd-logind[1539]: New session 26 of user core. Jan 28 01:23:54.979000 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:23:55.176468 sshd[4418]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:55.182253 systemd[1]: sshd@25-10.0.0.106:22-10.0.0.1:56142.service: Deactivated successfully. Jan 28 01:23:55.187895 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:23:55.189215 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:23:55.191800 systemd-logind[1539]: Removed session 26. Jan 28 01:24:00.186403 systemd[1]: Started sshd@26-10.0.0.106:22-10.0.0.1:56144.service - OpenSSH per-connection server daemon (10.0.0.1:56144). Jan 28 01:24:00.223237 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 56144 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:00.226386 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:00.237665 systemd-logind[1539]: New session 27 of user core. Jan 28 01:24:00.245986 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 01:24:00.418312 sshd[4437]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:00.424712 systemd[1]: sshd@26-10.0.0.106:22-10.0.0.1:56144.service: Deactivated successfully. Jan 28 01:24:00.429951 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:24:00.430220 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:24:00.433987 systemd-logind[1539]: Removed session 27. Jan 28 01:24:02.795074 kubelet[2759]: E0128 01:24:02.794753 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:05.450352 systemd[1]: Started sshd@27-10.0.0.106:22-10.0.0.1:33732.service - OpenSSH per-connection server daemon (10.0.0.1:33732). Jan 28 01:24:05.537394 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 33732 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:05.541413 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:05.561157 systemd-logind[1539]: New session 28 of user core. Jan 28 01:24:05.572732 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 01:24:05.838386 sshd[4455]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:05.851392 systemd[1]: sshd@27-10.0.0.106:22-10.0.0.1:33732.service: Deactivated successfully. Jan 28 01:24:05.864892 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit. Jan 28 01:24:05.867843 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:24:05.872506 systemd-logind[1539]: Removed session 28. 
Jan 28 01:24:07.796530 kubelet[2759]: E0128 01:24:07.795565 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:10.800240 kubelet[2759]: E0128 01:24:10.800021 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:10.853020 systemd[1]: Started sshd@28-10.0.0.106:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736). Jan 28 01:24:10.896096 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:10.898166 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:10.909047 systemd-logind[1539]: New session 29 of user core. Jan 28 01:24:10.918300 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 01:24:11.092799 sshd[4471]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:11.100213 systemd[1]: sshd@28-10.0.0.106:22-10.0.0.1:33736.service: Deactivated successfully. Jan 28 01:24:11.106528 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:24:11.108153 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 01:24:11.114487 systemd-logind[1539]: Removed session 29. Jan 28 01:24:16.113290 systemd[1]: Started sshd@29-10.0.0.106:22-10.0.0.1:38468.service - OpenSSH per-connection server daemon (10.0.0.1:38468). Jan 28 01:24:16.184262 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 38468 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:16.188063 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:16.210892 systemd-logind[1539]: New session 30 of user core. Jan 28 01:24:16.220981 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 01:24:16.417794 sshd[4487]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:16.431979 systemd[1]: Started sshd@30-10.0.0.106:22-10.0.0.1:38480.service - OpenSSH per-connection server daemon (10.0.0.1:38480). Jan 28 01:24:16.433154 systemd[1]: sshd@29-10.0.0.106:22-10.0.0.1:38468.service: Deactivated successfully. Jan 28 01:24:16.436233 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:24:16.438921 systemd-logind[1539]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:24:16.445525 systemd-logind[1539]: Removed session 30. Jan 28 01:24:16.476435 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 38480 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:16.479099 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:16.487726 systemd-logind[1539]: New session 31 of user core. Jan 28 01:24:16.499739 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 28 01:24:16.885026 sshd[4499]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:16.892989 systemd[1]: Started sshd@31-10.0.0.106:22-10.0.0.1:38492.service - OpenSSH per-connection server daemon (10.0.0.1:38492). Jan 28 01:24:16.893780 systemd[1]: sshd@30-10.0.0.106:22-10.0.0.1:38480.service: Deactivated successfully. Jan 28 01:24:16.897560 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit. 
Jan 28 01:24:16.898748 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 01:24:16.902120 systemd-logind[1539]: Removed session 31. Jan 28 01:24:16.945179 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 38492 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:16.948365 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:16.957022 systemd-logind[1539]: New session 32 of user core. Jan 28 01:24:16.966201 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 28 01:24:17.622187 sshd[4514]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:17.630895 systemd[1]: Started sshd@32-10.0.0.106:22-10.0.0.1:38506.service - OpenSSH per-connection server daemon (10.0.0.1:38506). Jan 28 01:24:17.631399 systemd[1]: sshd@31-10.0.0.106:22-10.0.0.1:38492.service: Deactivated successfully. Jan 28 01:24:17.641648 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:24:17.643298 systemd-logind[1539]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:24:17.645470 systemd-logind[1539]: Removed session 32. Jan 28 01:24:17.681051 sshd[4536]: Accepted publickey for core from 10.0.0.1 port 38506 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:17.682464 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:17.695097 systemd-logind[1539]: New session 33 of user core. Jan 28 01:24:17.711173 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:24:17.796274 kubelet[2759]: E0128 01:24:17.794345 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:18.113412 sshd[4536]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:18.120937 systemd[1]: Started sshd@33-10.0.0.106:22-10.0.0.1:38508.service - OpenSSH per-connection server daemon (10.0.0.1:38508). Jan 28 01:24:18.121727 systemd[1]: sshd@32-10.0.0.106:22-10.0.0.1:38506.service: Deactivated successfully. Jan 28 01:24:18.126047 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:24:18.127021 systemd-logind[1539]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:24:18.129432 systemd-logind[1539]: Removed session 33. Jan 28 01:24:18.171801 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 38508 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:18.175399 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:18.185495 systemd-logind[1539]: New session 34 of user core. Jan 28 01:24:18.190052 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 28 01:24:18.645404 sshd[4550]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:18.660107 systemd[1]: sshd@33-10.0.0.106:22-10.0.0.1:38508.service: Deactivated successfully. Jan 28 01:24:18.664914 systemd-logind[1539]: Session 34 logged out. Waiting for processes to exit. Jan 28 01:24:18.665204 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:24:18.667372 systemd-logind[1539]: Removed session 34. 
Jan 28 01:24:19.794936 kubelet[2759]: E0128 01:24:19.794812 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:20.805559 kubelet[2759]: E0128 01:24:20.800131 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:23.680219 systemd[1]: Started sshd@34-10.0.0.106:22-10.0.0.1:60196.service - OpenSSH per-connection server daemon (10.0.0.1:60196). Jan 28 01:24:23.799519 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 60196 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:23.805948 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:23.835174 systemd-logind[1539]: New session 35 of user core. Jan 28 01:24:23.856064 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:24:24.233814 sshd[4570]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:24.249513 systemd[1]: sshd@34-10.0.0.106:22-10.0.0.1:60196.service: Deactivated successfully. Jan 28 01:24:24.258052 systemd-logind[1539]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:24:24.266031 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:24:24.273432 systemd-logind[1539]: Removed session 35. Jan 28 01:24:29.276505 systemd[1]: Started sshd@35-10.0.0.106:22-10.0.0.1:60204.service - OpenSSH per-connection server daemon (10.0.0.1:60204). Jan 28 01:24:29.356729 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 60204 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:29.367956 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:29.383548 systemd-logind[1539]: New session 36 of user core. Jan 28 01:24:29.398045 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 28 01:24:29.732688 sshd[4587]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:29.765417 systemd[1]: sshd@35-10.0.0.106:22-10.0.0.1:60204.service: Deactivated successfully. Jan 28 01:24:29.769565 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:24:29.778043 systemd-logind[1539]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:24:29.780025 systemd-logind[1539]: Removed session 36. Jan 28 01:24:30.797852 kubelet[2759]: E0128 01:24:30.795952 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:34.753962 systemd[1]: Started sshd@36-10.0.0.106:22-10.0.0.1:50586.service - OpenSSH per-connection server daemon (10.0.0.1:50586). Jan 28 01:24:34.861327 sshd[4602]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:34.860134 sshd[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:34.885863 systemd-logind[1539]: New session 37 of user core. Jan 28 01:24:34.900103 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:24:35.195537 sshd[4602]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:35.219530 systemd[1]: sshd@36-10.0.0.106:22-10.0.0.1:50586.service: Deactivated successfully. Jan 28 01:24:35.223551 systemd-logind[1539]: Session 37 logged out. 
Waiting for processes to exit. Jan 28 01:24:35.224843 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:24:35.230662 systemd-logind[1539]: Removed session 37. Jan 28 01:24:40.223439 systemd[1]: Started sshd@37-10.0.0.106:22-10.0.0.1:50594.service - OpenSSH per-connection server daemon (10.0.0.1:50594). Jan 28 01:24:40.300801 sshd[4618]: Accepted publickey for core from 10.0.0.1 port 50594 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:40.305257 sshd[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:40.325273 systemd-logind[1539]: New session 38 of user core. Jan 28 01:24:40.334478 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 28 01:24:40.647758 sshd[4618]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:40.662018 systemd[1]: sshd@37-10.0.0.106:22-10.0.0.1:50594.service: Deactivated successfully. Jan 28 01:24:40.674241 systemd-logind[1539]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:24:40.677951 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:24:40.679178 systemd-logind[1539]: Removed session 38. Jan 28 01:24:45.684352 systemd[1]: Started sshd@38-10.0.0.106:22-10.0.0.1:34430.service - OpenSSH per-connection server daemon (10.0.0.1:34430). Jan 28 01:24:45.814143 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 34430 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:45.819778 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:45.854208 systemd-logind[1539]: New session 39 of user core. Jan 28 01:24:45.878500 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 28 01:24:46.313391 sshd[4634]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:46.319820 systemd[1]: sshd@38-10.0.0.106:22-10.0.0.1:34430.service: Deactivated successfully. Jan 28 01:24:46.335859 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:24:46.336222 systemd-logind[1539]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:24:46.340388 systemd-logind[1539]: Removed session 39. Jan 28 01:24:51.358011 systemd[1]: Started sshd@39-10.0.0.106:22-10.0.0.1:34446.service - OpenSSH per-connection server daemon (10.0.0.1:34446). Jan 28 01:24:51.467037 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 34446 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:51.470138 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:51.487725 systemd-logind[1539]: New session 40 of user core. Jan 28 01:24:51.504162 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 28 01:24:51.947468 sshd[4649]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:51.968456 systemd[1]: sshd@39-10.0.0.106:22-10.0.0.1:34446.service: Deactivated successfully. Jan 28 01:24:51.981290 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:24:51.992841 systemd-logind[1539]: Session 40 logged out. Waiting for processes to exit. Jan 28 01:24:51.994907 systemd-logind[1539]: Removed session 40. 
Jan 28 01:24:52.811155 kubelet[2759]: E0128 01:24:52.808868 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:24:56.972998 systemd[1]: Started sshd@40-10.0.0.106:22-10.0.0.1:57594.service - OpenSSH per-connection server daemon (10.0.0.1:57594). Jan 28 01:24:57.072802 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 57594 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:24:57.083234 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:57.105371 systemd-logind[1539]: New session 41 of user core. Jan 28 01:24:57.124950 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 28 01:24:57.456453 sshd[4666]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:57.470292 systemd[1]: sshd@40-10.0.0.106:22-10.0.0.1:57594.service: Deactivated successfully. Jan 28 01:24:57.481954 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:24:57.483390 systemd-logind[1539]: Session 41 logged out. Waiting for processes to exit. Jan 28 01:24:57.485010 systemd-logind[1539]: Removed session 41. Jan 28 01:25:02.489277 systemd[1]: Started sshd@41-10.0.0.106:22-10.0.0.1:39112.service - OpenSSH per-connection server daemon (10.0.0.1:39112). Jan 28 01:25:02.564274 sshd[4683]: Accepted publickey for core from 10.0.0.1 port 39112 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:02.568171 sshd[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:02.585863 systemd-logind[1539]: New session 42 of user core. Jan 28 01:25:02.612855 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 28 01:25:02.953501 sshd[4683]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:02.963158 systemd[1]: sshd@41-10.0.0.106:22-10.0.0.1:39112.service: Deactivated successfully. Jan 28 01:25:02.973508 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:25:02.976531 systemd-logind[1539]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:25:02.980427 systemd-logind[1539]: Removed session 42. Jan 28 01:25:07.989317 systemd[1]: Started sshd@42-10.0.0.106:22-10.0.0.1:39128.service - OpenSSH per-connection server daemon (10.0.0.1:39128). Jan 28 01:25:08.089050 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 39128 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:08.092274 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:08.117345 systemd-logind[1539]: New session 43 of user core. Jan 28 01:25:08.124057 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 28 01:25:08.696247 sshd[4698]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:08.706889 systemd[1]: sshd@42-10.0.0.106:22-10.0.0.1:39128.service: Deactivated successfully. Jan 28 01:25:08.734987 systemd[1]: session-43.scope: Deactivated successfully. Jan 28 01:25:08.748497 systemd-logind[1539]: Session 43 logged out. Waiting for processes to exit. Jan 28 01:25:08.758520 systemd-logind[1539]: Removed session 43. 
Jan 28 01:25:11.807299 kubelet[2759]: E0128 01:25:11.805085 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:13.721994 systemd[1]: Started sshd@43-10.0.0.106:22-10.0.0.1:36084.service - OpenSSH per-connection server daemon (10.0.0.1:36084). Jan 28 01:25:13.850062 sshd[4717]: Accepted publickey for core from 10.0.0.1 port 36084 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:13.856790 sshd[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:13.874185 systemd-logind[1539]: New session 44 of user core. Jan 28 01:25:13.893050 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:25:14.317961 sshd[4717]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:14.335624 systemd[1]: sshd@43-10.0.0.106:22-10.0.0.1:36084.service: Deactivated successfully. Jan 28 01:25:14.344906 systemd-logind[1539]: Session 44 logged out. Waiting for processes to exit. Jan 28 01:25:14.353262 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:25:14.354217 systemd-logind[1539]: Removed session 44. Jan 28 01:25:18.800740 kubelet[2759]: E0128 01:25:18.796199 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:19.356631 systemd[1]: Started sshd@44-10.0.0.106:22-10.0.0.1:36100.service - OpenSSH per-connection server daemon (10.0.0.1:36100). Jan 28 01:25:19.450320 sshd[4732]: Accepted publickey for core from 10.0.0.1 port 36100 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:19.458424 sshd[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:19.490380 systemd-logind[1539]: New session 45 of user core. Jan 28 01:25:19.510164 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 28 01:25:19.800222 kubelet[2759]: E0128 01:25:19.794862 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:19.888258 sshd[4732]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:19.905485 systemd[1]: sshd@44-10.0.0.106:22-10.0.0.1:36100.service: Deactivated successfully. Jan 28 01:25:19.914962 systemd-logind[1539]: Session 45 logged out. Waiting for processes to exit. Jan 28 01:25:19.915811 systemd[1]: session-45.scope: Deactivated successfully. Jan 28 01:25:19.917475 systemd-logind[1539]: Removed session 45. Jan 28 01:25:24.914640 systemd[1]: Started sshd@45-10.0.0.106:22-10.0.0.1:47440.service - OpenSSH per-connection server daemon (10.0.0.1:47440). Jan 28 01:25:25.034828 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 47440 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:25.036764 sshd[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:25.077619 systemd-logind[1539]: New session 46 of user core. Jan 28 01:25:25.099291 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 28 01:25:25.584123 sshd[4749]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:25.595778 systemd-logind[1539]: Session 46 logged out. Waiting for processes to exit. 
Jan 28 01:25:25.608799 systemd[1]: sshd@45-10.0.0.106:22-10.0.0.1:47440.service: Deactivated successfully. Jan 28 01:25:25.614435 systemd[1]: session-46.scope: Deactivated successfully. Jan 28 01:25:25.636346 systemd-logind[1539]: Removed session 46. Jan 28 01:25:26.799222 kubelet[2759]: E0128 01:25:26.797434 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:29.796045 kubelet[2759]: E0128 01:25:29.795151 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:30.605377 systemd[1]: Started sshd@46-10.0.0.106:22-10.0.0.1:47448.service - OpenSSH per-connection server daemon (10.0.0.1:47448). Jan 28 01:25:30.688485 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 47448 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:30.693356 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:30.728214 systemd-logind[1539]: New session 47 of user core. Jan 28 01:25:30.738678 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 28 01:25:31.044262 sshd[4766]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:31.054726 systemd[1]: sshd@46-10.0.0.106:22-10.0.0.1:47448.service: Deactivated successfully. Jan 28 01:25:31.068091 systemd[1]: session-47.scope: Deactivated successfully. Jan 28 01:25:31.068411 systemd-logind[1539]: Session 47 logged out. Waiting for processes to exit. Jan 28 01:25:31.080280 systemd-logind[1539]: Removed session 47. Jan 28 01:25:32.798150 kubelet[2759]: E0128 01:25:32.794673 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:35.794979 kubelet[2759]: E0128 01:25:35.793556 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:36.083754 systemd[1]: Started sshd@47-10.0.0.106:22-10.0.0.1:57058.service - OpenSSH per-connection server daemon (10.0.0.1:57058). Jan 28 01:25:36.193927 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 57058 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:36.195866 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:36.223666 systemd-logind[1539]: New session 48 of user core. Jan 28 01:25:36.231071 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 28 01:25:36.536992 sshd[4781]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:36.548380 systemd[1]: sshd@47-10.0.0.106:22-10.0.0.1:57058.service: Deactivated successfully. Jan 28 01:25:36.557193 systemd-logind[1539]: Session 48 logged out. Waiting for processes to exit. Jan 28 01:25:36.567855 systemd[1]: session-48.scope: Deactivated successfully. Jan 28 01:25:36.572185 systemd-logind[1539]: Removed session 48. Jan 28 01:25:41.566277 systemd[1]: Started sshd@48-10.0.0.106:22-10.0.0.1:57066.service - OpenSSH per-connection server daemon (10.0.0.1:57066). 
Jan 28 01:25:41.651558 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 57066 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:41.660897 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:41.688015 systemd-logind[1539]: New session 49 of user core. Jan 28 01:25:41.698392 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 28 01:25:41.977508 sshd[4796]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:41.999555 systemd[1]: Started sshd@49-10.0.0.106:22-10.0.0.1:57078.service - OpenSSH per-connection server daemon (10.0.0.1:57078). Jan 28 01:25:42.000334 systemd[1]: sshd@48-10.0.0.106:22-10.0.0.1:57066.service: Deactivated successfully. Jan 28 01:25:42.010912 systemd-logind[1539]: Session 49 logged out. Waiting for processes to exit. Jan 28 01:25:42.015351 systemd[1]: session-49.scope: Deactivated successfully. Jan 28 01:25:42.023282 systemd-logind[1539]: Removed session 49. Jan 28 01:25:42.076379 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 57078 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:42.086991 sshd[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:42.104925 systemd-logind[1539]: New session 50 of user core. Jan 28 01:25:42.112121 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 28 01:25:46.283113 containerd[1563]: time="2026-01-28T01:25:46.277356866Z" level=info msg="StopContainer for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" with timeout 30 (s)" Jan 28 01:25:46.283113 containerd[1563]: time="2026-01-28T01:25:46.281106516Z" level=info msg="Stop container \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" with signal terminated" Jan 28 01:25:46.380674 containerd[1563]: time="2026-01-28T01:25:46.380092585Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:25:46.403185 containerd[1563]: time="2026-01-28T01:25:46.403055956Z" level=info msg="StopContainer for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" with timeout 2 (s)" Jan 28 01:25:46.405733 containerd[1563]: time="2026-01-28T01:25:46.403672321Z" level=info msg="Stop container \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" with signal terminated" Jan 28 01:25:46.434554 systemd-networkd[1246]: lxc_health: Link DOWN Jan 28 01:25:46.434570 systemd-networkd[1246]: lxc_health: Lost carrier Jan 28 01:25:46.462293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c-rootfs.mount: Deactivated successfully. 
Jan 28 01:25:46.492090 containerd[1563]: time="2026-01-28T01:25:46.492002802Z" level=info msg="shim disconnected" id=b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c namespace=k8s.io Jan 28 01:25:46.492090 containerd[1563]: time="2026-01-28T01:25:46.492074046Z" level=warning msg="cleaning up after shim disconnected" id=b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c namespace=k8s.io Jan 28 01:25:46.492428 containerd[1563]: time="2026-01-28T01:25:46.492105126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:46.529062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220-rootfs.mount: Deactivated successfully. Jan 28 01:25:46.543880 containerd[1563]: time="2026-01-28T01:25:46.543719752Z" level=info msg="shim disconnected" id=582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220 namespace=k8s.io Jan 28 01:25:46.544700 containerd[1563]: time="2026-01-28T01:25:46.544196816Z" level=warning msg="cleaning up after shim disconnected" id=582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220 namespace=k8s.io Jan 28 01:25:46.544700 containerd[1563]: time="2026-01-28T01:25:46.544217074Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:46.559298 containerd[1563]: time="2026-01-28T01:25:46.559173963Z" level=info msg="StopContainer for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" returns successfully" Jan 28 01:25:46.569525 containerd[1563]: time="2026-01-28T01:25:46.569454221Z" level=info msg="StopPodSandbox for \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\"" Jan 28 01:25:46.569833 containerd[1563]: time="2026-01-28T01:25:46.569532880Z" level=info msg="Container to stop \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:25:46.577532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de-shm.mount: Deactivated successfully. 
Jan 28 01:25:46.631713 containerd[1563]: time="2026-01-28T01:25:46.631621498Z" level=info msg="StopContainer for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" returns successfully" Jan 28 01:25:46.632442 containerd[1563]: time="2026-01-28T01:25:46.632415647Z" level=info msg="StopPodSandbox for \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\"" Jan 28 01:25:46.639679 containerd[1563]: time="2026-01-28T01:25:46.632563367Z" level=info msg="Container to stop \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:25:46.639679 containerd[1563]: time="2026-01-28T01:25:46.632764639Z" level=info msg="Container to stop \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:25:46.639679 containerd[1563]: time="2026-01-28T01:25:46.632783724Z" level=info msg="Container to stop \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:25:46.639679 containerd[1563]: time="2026-01-28T01:25:46.632802150Z" level=info msg="Container to stop \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:25:46.639679 containerd[1563]: time="2026-01-28T01:25:46.632815524Z" level=info msg="Container to stop \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:25:46.644802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044-shm.mount: Deactivated successfully. 
Jan 28 01:25:46.717693 containerd[1563]: time="2026-01-28T01:25:46.717417765Z" level=info msg="shim disconnected" id=dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de namespace=k8s.io Jan 28 01:25:46.717693 containerd[1563]: time="2026-01-28T01:25:46.717486375Z" level=warning msg="cleaning up after shim disconnected" id=dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de namespace=k8s.io Jan 28 01:25:46.717693 containerd[1563]: time="2026-01-28T01:25:46.717498829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:46.721134 containerd[1563]: time="2026-01-28T01:25:46.720777881Z" level=info msg="shim disconnected" id=587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044 namespace=k8s.io Jan 28 01:25:46.721134 containerd[1563]: time="2026-01-28T01:25:46.720824940Z" level=warning msg="cleaning up after shim disconnected" id=587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044 namespace=k8s.io Jan 28 01:25:46.721134 containerd[1563]: time="2026-01-28T01:25:46.720833498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:46.764208 containerd[1563]: time="2026-01-28T01:25:46.764073481Z" level=info msg="TearDown network for sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" successfully" Jan 28 01:25:46.764208 containerd[1563]: time="2026-01-28T01:25:46.764126621Z" level=info msg="StopPodSandbox for \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" returns successfully" Jan 28 01:25:46.769256 containerd[1563]: time="2026-01-28T01:25:46.769151401Z" level=info msg="TearDown network for sandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" successfully" Jan 28 01:25:46.769256 containerd[1563]: time="2026-01-28T01:25:46.769207617Z" level=info msg="StopPodSandbox for \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" returns successfully" Jan 28 01:25:46.799913 kubelet[2759]: I0128 01:25:46.795818 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-hubble-tls\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.799913 kubelet[2759]: I0128 01:25:46.795874 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-run\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.799913 kubelet[2759]: I0128 01:25:46.795896 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-xtables-lock\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.799913 kubelet[2759]: I0128 01:25:46.795922 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-config-path\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.799913 kubelet[2759]: I0128 01:25:46.795943 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-kernel\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.799913 kubelet[2759]: I0128 01:25:46.795966 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw6s4\" (UniqueName: \"kubernetes.io/projected/809f5307-df19-4d15-8b0c-5b85589c0b89-kube-api-access-hw6s4\") pod \"809f5307-df19-4d15-8b0c-5b85589c0b89\" (UID: \"809f5307-df19-4d15-8b0c-5b85589c0b89\") " Jan 28 01:25:46.801375 kubelet[2759]: I0128 01:25:46.795990 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-net\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.801375 kubelet[2759]: I0128 01:25:46.796009 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-etc-cni-netd\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.801375 kubelet[2759]: I0128 01:25:46.796028 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cni-path\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.801375 kubelet[2759]: I0128 01:25:46.796051 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-cgroup\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.801375 kubelet[2759]: I0128 01:25:46.796070 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-bpf-maps\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.801375 kubelet[2759]: I0128 01:25:46.796094 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh9pc\" (UniqueName: \"kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-kube-api-access-vh9pc\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.815740 kubelet[2759]: I0128 01:25:46.796116 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-lib-modules\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.815740 kubelet[2759]: I0128 01:25:46.796138 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/809f5307-df19-4d15-8b0c-5b85589c0b89-cilium-config-path\") pod \"809f5307-df19-4d15-8b0c-5b85589c0b89\" (UID: \"809f5307-df19-4d15-8b0c-5b85589c0b89\") " Jan 28 01:25:46.815740 kubelet[2759]: I0128 01:25:46.796157 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-hostproc\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.815740 kubelet[2759]: I0128 01:25:46.796182 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82a64e42-04eb-4eb6-8e7b-6641864556c1-clustermesh-secrets\") pod \"82a64e42-04eb-4eb6-8e7b-6641864556c1\" (UID: \"82a64e42-04eb-4eb6-8e7b-6641864556c1\") " Jan 28 01:25:46.815740 kubelet[2759]: I0128 01:25:46.796837 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.815979 kubelet[2759]: I0128 01:25:46.799034 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.815979 kubelet[2759]: I0128 01:25:46.802033 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.815979 kubelet[2759]: I0128 01:25:46.802060 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.815979 kubelet[2759]: I0128 01:25:46.802084 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.815979 kubelet[2759]: I0128 01:25:46.802110 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.816161 kubelet[2759]: I0128 01:25:46.810054 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:25:46.816161 kubelet[2759]: I0128 01:25:46.810112 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.816161 kubelet[2759]: I0128 01:25:46.810138 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.816161 kubelet[2759]: I0128 01:25:46.811660 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.816161 kubelet[2759]: I0128 01:25:46.811927 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:25:46.826282 kubelet[2759]: I0128 01:25:46.826243 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809f5307-df19-4d15-8b0c-5b85589c0b89-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "809f5307-df19-4d15-8b0c-5b85589c0b89" (UID: "809f5307-df19-4d15-8b0c-5b85589c0b89"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:25:46.831902 kubelet[2759]: I0128 01:25:46.827846 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:25:46.831902 kubelet[2759]: I0128 01:25:46.830937 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a64e42-04eb-4eb6-8e7b-6641864556c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:25:46.833558 kubelet[2759]: I0128 01:25:46.833086 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-kube-api-access-vh9pc" (OuterVolumeSpecName: "kube-api-access-vh9pc") pod "82a64e42-04eb-4eb6-8e7b-6641864556c1" (UID: "82a64e42-04eb-4eb6-8e7b-6641864556c1"). InnerVolumeSpecName "kube-api-access-vh9pc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:25:46.836186 kubelet[2759]: I0128 01:25:46.836001 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809f5307-df19-4d15-8b0c-5b85589c0b89-kube-api-access-hw6s4" (OuterVolumeSpecName: "kube-api-access-hw6s4") pod "809f5307-df19-4d15-8b0c-5b85589c0b89" (UID: "809f5307-df19-4d15-8b0c-5b85589c0b89"). InnerVolumeSpecName "kube-api-access-hw6s4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:25:46.899978 kubelet[2759]: I0128 01:25:46.898805 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903471 2759 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903495 2759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vh9pc\" (UniqueName: \"kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-kube-api-access-vh9pc\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903513 2759 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903526 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/809f5307-df19-4d15-8b0c-5b85589c0b89-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903539 2759 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903552 2759 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82a64e42-04eb-4eb6-8e7b-6641864556c1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903565 2759 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82a64e42-04eb-4eb6-8e7b-6641864556c1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.903737 kubelet[2759]: I0128 01:25:46.903623 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903638 2759 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903651 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82a64e42-04eb-4eb6-8e7b-6641864556c1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903664 2759 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903680 2759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hw6s4\" (UniqueName: \"kubernetes.io/projected/809f5307-df19-4d15-8b0c-5b85589c0b89-kube-api-access-hw6s4\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903692 2759 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903704 2759 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:46.904034 kubelet[2759]: I0128 01:25:46.903716 2759 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82a64e42-04eb-4eb6-8e7b-6641864556c1-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 28 01:25:47.243955 kubelet[2759]: I0128 01:25:47.243800 2759 scope.go:117] "RemoveContainer" containerID="b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c" Jan 28 01:25:47.266347 containerd[1563]: time="2026-01-28T01:25:47.264041569Z" level=info msg="RemoveContainer for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\"" Jan 28 01:25:47.317854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de-rootfs.mount: Deactivated successfully. Jan 28 01:25:47.323643 kubelet[2759]: I0128 01:25:47.320668 2759 scope.go:117] "RemoveContainer" containerID="b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c" Jan 28 01:25:47.323715 containerd[1563]: time="2026-01-28T01:25:47.320158878Z" level=info msg="RemoveContainer for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" returns successfully" Jan 28 01:25:47.323715 containerd[1563]: time="2026-01-28T01:25:47.320973048Z" level=error msg="ContainerStatus for \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\": not found" Jan 28 01:25:47.318109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044-rootfs.mount: Deactivated successfully. Jan 28 01:25:47.318300 systemd[1]: var-lib-kubelet-pods-82a64e42\x2d04eb\x2d4eb6\x2d8e7b\x2d6641864556c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvh9pc.mount: Deactivated successfully. Jan 28 01:25:47.320104 systemd[1]: var-lib-kubelet-pods-809f5307\x2ddf19\x2d4d15\x2d8b0c\x2d5b85589c0b89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhw6s4.mount: Deactivated successfully. Jan 28 01:25:47.320304 systemd[1]: var-lib-kubelet-pods-82a64e42\x2d04eb\x2d4eb6\x2d8e7b\x2d6641864556c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 28 01:25:47.325682 systemd[1]: var-lib-kubelet-pods-82a64e42\x2d04eb\x2d4eb6\x2d8e7b\x2d6641864556c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 28 01:25:47.330041 kubelet[2759]: E0128 01:25:47.329795 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\": not found" containerID="b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c" Jan 28 01:25:47.330304 kubelet[2759]: I0128 01:25:47.329868 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c"} err="failed to get container status \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8b22f0fd5ec36b05909c9cabe25039008c6dc2287ef267aac315b103c584e7c\": not found" Jan 28 01:25:47.330304 kubelet[2759]: I0128 01:25:47.330093 2759 scope.go:117] "RemoveContainer" containerID="582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220" Jan 28 01:25:47.335563 containerd[1563]: time="2026-01-28T01:25:47.335287527Z" level=info msg="RemoveContainer for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\"" Jan 28 01:25:47.349561 containerd[1563]: time="2026-01-28T01:25:47.349172927Z" level=info msg="RemoveContainer for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" returns successfully" Jan 28 01:25:47.352706 kubelet[2759]: I0128 01:25:47.352019 2759 scope.go:117] "RemoveContainer" containerID="6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa" Jan 28 01:25:47.355535 containerd[1563]: time="2026-01-28T01:25:47.354465457Z" level=info msg="RemoveContainer for \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\"" Jan 28 01:25:47.362341 containerd[1563]: time="2026-01-28T01:25:47.362229038Z" level=info msg="RemoveContainer for \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\" returns successfully" Jan 28 01:25:47.362718 kubelet[2759]: I0128 01:25:47.362628 2759 scope.go:117] "RemoveContainer" containerID="a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07" Jan 28 01:25:47.364275 containerd[1563]: time="2026-01-28T01:25:47.364247582Z" level=info msg="RemoveContainer for \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\"" Jan 28 01:25:47.378948 containerd[1563]: time="2026-01-28T01:25:47.376328895Z" level=info msg="RemoveContainer for \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\" returns successfully" Jan 28 01:25:47.379449 kubelet[2759]: I0128 01:25:47.378280 2759 scope.go:117] "RemoveContainer" containerID="5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838" Jan 28 01:25:47.381372 containerd[1563]: time="2026-01-28T01:25:47.381338302Z" level=info msg="RemoveContainer for \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\"" Jan 28 01:25:47.390665 containerd[1563]: time="2026-01-28T01:25:47.387130397Z" level=info msg="RemoveContainer for \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\" returns successfully" Jan 28 01:25:47.390665 containerd[1563]: time="2026-01-28T01:25:47.388877141Z" level=info msg="RemoveContainer for \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\"" Jan 28 01:25:47.390813 kubelet[2759]: I0128 
01:25:47.387346 2759 scope.go:117] "RemoveContainer" containerID="cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322" Jan 28 01:25:47.411182 containerd[1563]: time="2026-01-28T01:25:47.411129554Z" level=info msg="RemoveContainer for \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\" returns successfully" Jan 28 01:25:47.412329 kubelet[2759]: I0128 01:25:47.411905 2759 scope.go:117] "RemoveContainer" containerID="582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220" Jan 28 01:25:47.416212 containerd[1563]: time="2026-01-28T01:25:47.416103211Z" level=error msg="ContainerStatus for \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\": not found" Jan 28 01:25:47.422014 kubelet[2759]: E0128 01:25:47.421744 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\": not found" containerID="582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220" Jan 28 01:25:47.422014 kubelet[2759]: I0128 01:25:47.421795 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220"} err="failed to get container status \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\": rpc error: code = NotFound desc = an error occurred when try to find container \"582a4d28ea876f63123a1b15c834485760ad83e63ff5fad7e02b564a66025220\": not found" Jan 28 01:25:47.422014 kubelet[2759]: I0128 01:25:47.421830 2759 scope.go:117] "RemoveContainer" containerID="6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa" Jan 28 01:25:47.424700 containerd[1563]: time="2026-01-28T01:25:47.422097349Z" level=error msg="ContainerStatus for \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\": not found" Jan 28 01:25:47.425066 kubelet[2759]: E0128 01:25:47.425039 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\": not found" containerID="6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa" Jan 28 01:25:47.425844 kubelet[2759]: I0128 01:25:47.425789 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa"} err="failed to get container status \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"6708747f194f935f75dfc5f9416cc57cd0cb210a6608a8c0446b1551556144fa\": not found" Jan 28 01:25:47.425844 kubelet[2759]: I0128 01:25:47.425842 2759 scope.go:117] "RemoveContainer" containerID="a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07" Jan 28 01:25:47.426178 containerd[1563]: time="2026-01-28T01:25:47.426117058Z" level=error msg="ContainerStatus for \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\": not found" Jan 28 01:25:47.426550 kubelet[2759]: E0128 01:25:47.426325 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\": not found" containerID="a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07" Jan 28 01:25:47.426550 kubelet[2759]: I0128 01:25:47.426491 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07"} err="failed to get container status \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\": rpc error: code = NotFound desc = an error occurred when try to find container \"a70cfbe3d20e0d9cf391e391e1b62da340a47b2ca194c013dee2e05e3bc6ad07\": not found" Jan 28 01:25:47.426550 kubelet[2759]: I0128 01:25:47.426511 2759 scope.go:117] "RemoveContainer" containerID="5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838" Jan 28 01:25:47.427940 containerd[1563]: time="2026-01-28T01:25:47.427853023Z" level=error msg="ContainerStatus for \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\": not found" Jan 28 01:25:47.428275 kubelet[2759]: E0128 01:25:47.428080 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\": not found" containerID="5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838" Jan 28 01:25:47.428345 kubelet[2759]: I0128 01:25:47.428290 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838"} err="failed to get container status \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f82c3e44f6356d983119b427748458e6ce5b15c21a0a2813de69045b8295838\": not found" Jan 28 01:25:47.428383 kubelet[2759]: I0128 01:25:47.428331 2759 scope.go:117] "RemoveContainer" containerID="cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322" Jan 28 01:25:47.428922 containerd[1563]: time="2026-01-28T01:25:47.428853678Z" level=error msg="ContainerStatus for \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\": not found" Jan 28 01:25:47.429264 kubelet[2759]: E0128 01:25:47.429032 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\": not found" containerID="cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322" Jan 28 01:25:47.430215 kubelet[2759]: I0128 01:25:47.430100 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322"} err="failed to get container 
status \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc753bcee8e871b64ce2afd0188bb8d98eb8a91cef22c3715f7f1c205d684322\": not found" Jan 28 01:25:47.607043 sshd[4808]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:47.634226 systemd[1]: Started sshd@50-10.0.0.106:22-10.0.0.1:48130.service - OpenSSH per-connection server daemon (10.0.0.1:48130). Jan 28 01:25:47.636840 systemd[1]: sshd@49-10.0.0.106:22-10.0.0.1:57078.service: Deactivated successfully. Jan 28 01:25:47.655017 systemd[1]: session-50.scope: Deactivated successfully. Jan 28 01:25:47.657999 systemd-logind[1539]: Session 50 logged out. Waiting for processes to exit. Jan 28 01:25:47.664733 systemd-logind[1539]: Removed session 50. Jan 28 01:25:47.751308 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 48130 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:47.758670 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:47.790017 systemd-logind[1539]: New session 51 of user core. Jan 28 01:25:47.801047 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 28 01:25:48.071532 kubelet[2759]: E0128 01:25:48.067156 2759 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:25:48.804126 kubelet[2759]: I0128 01:25:48.803363 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809f5307-df19-4d15-8b0c-5b85589c0b89" path="/var/lib/kubelet/pods/809f5307-df19-4d15-8b0c-5b85589c0b89/volumes" Jan 28 01:25:48.812322 kubelet[2759]: I0128 01:25:48.811523 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a64e42-04eb-4eb6-8e7b-6641864556c1" path="/var/lib/kubelet/pods/82a64e42-04eb-4eb6-8e7b-6641864556c1/volumes" Jan 28 01:25:49.049731 kubelet[2759]: I0128 01:25:49.049647 2759 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T01:25:49Z","lastTransitionTime":"2026-01-28T01:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 28 01:25:49.617754 sshd[4972]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:49.653856 systemd[1]: Started sshd@51-10.0.0.106:22-10.0.0.1:48136.service - OpenSSH per-connection server daemon (10.0.0.1:48136). Jan 28 01:25:49.656039 systemd[1]: sshd@50-10.0.0.106:22-10.0.0.1:48130.service: Deactivated successfully. Jan 28 01:25:49.668217 systemd[1]: session-51.scope: Deactivated successfully. Jan 28 01:25:49.672899 systemd-logind[1539]: Session 51 logged out. Waiting for processes to exit. Jan 28 01:25:49.688537 systemd-logind[1539]: Removed session 51. Jan 28 01:25:49.716250 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 48136 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:49.720232 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:49.737253 systemd-logind[1539]: New session 52 of user core. Jan 28 01:25:49.755010 systemd[1]: Started session-52.scope - Session 52 of User core. 
Jan 28 01:25:49.836726 sshd[4986]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:49.863979 systemd[1]: Started sshd@52-10.0.0.106:22-10.0.0.1:48146.service - OpenSSH per-connection server daemon (10.0.0.1:48146). Jan 28 01:25:49.864833 systemd[1]: sshd@51-10.0.0.106:22-10.0.0.1:48136.service: Deactivated successfully. Jan 28 01:25:49.869391 kubelet[2759]: I0128 01:25:49.868411 2759 memory_manager.go:355] "RemoveStaleState removing state" podUID="82a64e42-04eb-4eb6-8e7b-6641864556c1" containerName="cilium-agent" Jan 28 01:25:49.869391 kubelet[2759]: I0128 01:25:49.868448 2759 memory_manager.go:355] "RemoveStaleState removing state" podUID="809f5307-df19-4d15-8b0c-5b85589c0b89" containerName="cilium-operator" Jan 28 01:25:49.881979 kubelet[2759]: I0128 01:25:49.877130 2759 status_manager.go:890] "Failed to get status for pod" podUID="3882e984-ff80-417b-8108-55601edeb229" pod="kube-system/cilium-47wml" err="pods \"cilium-47wml\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 28 01:25:49.881979 kubelet[2759]: I0128 01:25:49.881035 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-etc-cni-netd\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.881979 kubelet[2759]: I0128 01:25:49.881074 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-hostproc\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.881979 kubelet[2759]: I0128 01:25:49.881102 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3882e984-ff80-417b-8108-55601edeb229-hubble-tls\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.881979 kubelet[2759]: I0128 01:25:49.881128 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-cilium-cgroup\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882214 kubelet[2759]: I0128 01:25:49.881153 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3882e984-ff80-417b-8108-55601edeb229-cilium-ipsec-secrets\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882214 kubelet[2759]: I0128 01:25:49.881174 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-bpf-maps\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882214 kubelet[2759]: I0128 01:25:49.881198 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/3882e984-ff80-417b-8108-55601edeb229-clustermesh-secrets\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882214 kubelet[2759]: I0128 01:25:49.881259 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-cilium-run\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882214 kubelet[2759]: I0128 01:25:49.881282 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-lib-modules\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882214 kubelet[2759]: I0128 01:25:49.881302 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8t9\" (UniqueName: \"kubernetes.io/projected/3882e984-ff80-417b-8108-55601edeb229-kube-api-access-rb8t9\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882419 kubelet[2759]: I0128 01:25:49.881326 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-cni-path\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882419 kubelet[2759]: I0128 01:25:49.881352 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-host-proc-sys-net\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882419 kubelet[2759]: I0128 01:25:49.881379 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-host-proc-sys-kernel\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882419 kubelet[2759]: I0128 01:25:49.881402 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3882e984-ff80-417b-8108-55601edeb229-xtables-lock\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.882419 kubelet[2759]: I0128 01:25:49.881426 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3882e984-ff80-417b-8108-55601edeb229-cilium-config-path\") pod \"cilium-47wml\" (UID: \"3882e984-ff80-417b-8108-55601edeb229\") " pod="kube-system/cilium-47wml" Jan 28 01:25:49.914891 systemd[1]: session-52.scope: Deactivated successfully. Jan 28 01:25:49.918534 systemd-logind[1539]: Session 52 logged out. Waiting for processes to exit. Jan 28 01:25:49.927743 systemd-logind[1539]: Removed session 52. 
Jan 28 01:25:49.992719 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 48146 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:25:49.993780 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:50.081743 systemd-logind[1539]: New session 53 of user core. Jan 28 01:25:50.093152 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 28 01:25:50.207553 kubelet[2759]: E0128 01:25:50.204778 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:50.207822 containerd[1563]: time="2026-01-28T01:25:50.207185384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47wml,Uid:3882e984-ff80-417b-8108-55601edeb229,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:50.333968 containerd[1563]: time="2026-01-28T01:25:50.333159873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:50.333968 containerd[1563]: time="2026-01-28T01:25:50.333379418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:50.333968 containerd[1563]: time="2026-01-28T01:25:50.333428191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:50.333968 containerd[1563]: time="2026-01-28T01:25:50.333673155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:50.512754 containerd[1563]: time="2026-01-28T01:25:50.511232042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47wml,Uid:3882e984-ff80-417b-8108-55601edeb229,Namespace:kube-system,Attempt:0,} returns sandbox id \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\"" Jan 28 01:25:50.512963 kubelet[2759]: E0128 01:25:50.512879 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:50.524651 containerd[1563]: time="2026-01-28T01:25:50.524354215Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 01:25:50.569761 containerd[1563]: time="2026-01-28T01:25:50.569683765Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de5db9e4b4303545d2bd4837443852e80823a4acb252c1bcb140b802baa4a301\"" Jan 28 01:25:50.570656 containerd[1563]: time="2026-01-28T01:25:50.570481521Z" level=info msg="StartContainer for \"de5db9e4b4303545d2bd4837443852e80823a4acb252c1bcb140b802baa4a301\"" Jan 28 01:25:50.761330 containerd[1563]: time="2026-01-28T01:25:50.761223573Z" level=info msg="StartContainer for \"de5db9e4b4303545d2bd4837443852e80823a4acb252c1bcb140b802baa4a301\" returns successfully" Jan 28 01:25:50.975218 containerd[1563]: time="2026-01-28T01:25:50.974285950Z" level=info msg="shim disconnected" id=de5db9e4b4303545d2bd4837443852e80823a4acb252c1bcb140b802baa4a301 namespace=k8s.io Jan 28 01:25:50.975218 containerd[1563]: time="2026-01-28T01:25:50.974352106Z" level=warning 
msg="cleaning up after shim disconnected" id=de5db9e4b4303545d2bd4837443852e80823a4acb252c1bcb140b802baa4a301 namespace=k8s.io Jan 28 01:25:50.975218 containerd[1563]: time="2026-01-28T01:25:50.974367315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:51.350915 kubelet[2759]: E0128 01:25:51.345919 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:51.353040 containerd[1563]: time="2026-01-28T01:25:51.350245833Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 01:25:51.430830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661449562.mount: Deactivated successfully. Jan 28 01:25:51.438216 containerd[1563]: time="2026-01-28T01:25:51.437081057Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9fde6b033f1020bfb810da5aab4339c74b63f4116d48edfade47c460b9eafeef\"" Jan 28 01:25:51.438216 containerd[1563]: time="2026-01-28T01:25:51.437972457Z" level=info msg="StartContainer for \"9fde6b033f1020bfb810da5aab4339c74b63f4116d48edfade47c460b9eafeef\"" Jan 28 01:25:51.631787 containerd[1563]: time="2026-01-28T01:25:51.631448666Z" level=info msg="StartContainer for \"9fde6b033f1020bfb810da5aab4339c74b63f4116d48edfade47c460b9eafeef\" returns successfully" Jan 28 01:25:51.747866 containerd[1563]: time="2026-01-28T01:25:51.747685703Z" level=info msg="shim disconnected" id=9fde6b033f1020bfb810da5aab4339c74b63f4116d48edfade47c460b9eafeef namespace=k8s.io Jan 28 01:25:51.747866 containerd[1563]: time="2026-01-28T01:25:51.747757770Z" level=warning msg="cleaning up after shim disconnected" id=9fde6b033f1020bfb810da5aab4339c74b63f4116d48edfade47c460b9eafeef namespace=k8s.io Jan 28 01:25:51.747866 containerd[1563]: time="2026-01-28T01:25:51.747769852Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:52.012700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fde6b033f1020bfb810da5aab4339c74b63f4116d48edfade47c460b9eafeef-rootfs.mount: Deactivated successfully. 
Jan 28 01:25:52.367327 kubelet[2759]: E0128 01:25:52.365186 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:52.378166 containerd[1563]: time="2026-01-28T01:25:52.377768735Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 01:25:52.474524 containerd[1563]: time="2026-01-28T01:25:52.471124531Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea98d92c86af7e82ed7f534267bad30b54dc476d4eb9053b1323853b976e83ca\"" Jan 28 01:25:52.474524 containerd[1563]: time="2026-01-28T01:25:52.471888739Z" level=info msg="StartContainer for \"ea98d92c86af7e82ed7f534267bad30b54dc476d4eb9053b1323853b976e83ca\"" Jan 28 01:25:52.721689 containerd[1563]: time="2026-01-28T01:25:52.721044139Z" level=info msg="StartContainer for \"ea98d92c86af7e82ed7f534267bad30b54dc476d4eb9053b1323853b976e83ca\" returns successfully" Jan 28 01:25:52.817407 containerd[1563]: time="2026-01-28T01:25:52.816695917Z" level=info msg="shim disconnected" id=ea98d92c86af7e82ed7f534267bad30b54dc476d4eb9053b1323853b976e83ca namespace=k8s.io Jan 28 01:25:52.817407 containerd[1563]: time="2026-01-28T01:25:52.816756632Z" level=warning msg="cleaning up after shim disconnected" id=ea98d92c86af7e82ed7f534267bad30b54dc476d4eb9053b1323853b976e83ca namespace=k8s.io Jan 28 01:25:52.817407 containerd[1563]: time="2026-01-28T01:25:52.816772281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:53.015228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea98d92c86af7e82ed7f534267bad30b54dc476d4eb9053b1323853b976e83ca-rootfs.mount: Deactivated successfully. Jan 28 01:25:53.070446 kubelet[2759]: E0128 01:25:53.070383 2759 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:25:53.388681 kubelet[2759]: E0128 01:25:53.385715 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:53.389223 containerd[1563]: time="2026-01-28T01:25:53.388016591Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 01:25:53.469820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999851038.mount: Deactivated successfully. 
Jan 28 01:25:53.498485 containerd[1563]: time="2026-01-28T01:25:53.498274593Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e9ad5c11c49307e6c40a2b7bb2aa8c638ba21cfa4cefadcdc506d9adcb49ff9\"" Jan 28 01:25:53.500773 containerd[1563]: time="2026-01-28T01:25:53.499548127Z" level=info msg="StartContainer for \"0e9ad5c11c49307e6c40a2b7bb2aa8c638ba21cfa4cefadcdc506d9adcb49ff9\"" Jan 28 01:25:53.697432 containerd[1563]: time="2026-01-28T01:25:53.697295740Z" level=info msg="StartContainer for \"0e9ad5c11c49307e6c40a2b7bb2aa8c638ba21cfa4cefadcdc506d9adcb49ff9\" returns successfully" Jan 28 01:25:53.749866 containerd[1563]: time="2026-01-28T01:25:53.749425239Z" level=info msg="shim disconnected" id=0e9ad5c11c49307e6c40a2b7bb2aa8c638ba21cfa4cefadcdc506d9adcb49ff9 namespace=k8s.io Jan 28 01:25:53.749866 containerd[1563]: time="2026-01-28T01:25:53.749514529Z" level=warning msg="cleaning up after shim disconnected" id=0e9ad5c11c49307e6c40a2b7bb2aa8c638ba21cfa4cefadcdc506d9adcb49ff9 namespace=k8s.io Jan 28 01:25:53.749866 containerd[1563]: time="2026-01-28T01:25:53.749535077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:54.021438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9ad5c11c49307e6c40a2b7bb2aa8c638ba21cfa4cefadcdc506d9adcb49ff9-rootfs.mount: Deactivated successfully. Jan 28 01:25:54.396638 kubelet[2759]: E0128 01:25:54.395811 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:54.404782 containerd[1563]: time="2026-01-28T01:25:54.404705442Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 01:25:54.481652 containerd[1563]: time="2026-01-28T01:25:54.481473820Z" level=info msg="CreateContainer within sandbox \"f14a59a875ad0109018e25a8f86900ef7bc3e3aa43b155f61c07123e3e0c606f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13788560e805d3cb3a8578cc5113bc109ae02f8536e41cb826c5939ffb83f95c\"" Jan 28 01:25:54.483371 containerd[1563]: time="2026-01-28T01:25:54.483129291Z" level=info msg="StartContainer for \"13788560e805d3cb3a8578cc5113bc109ae02f8536e41cb826c5939ffb83f95c\"" Jan 28 01:25:54.663665 containerd[1563]: time="2026-01-28T01:25:54.662911986Z" level=info msg="StartContainer for \"13788560e805d3cb3a8578cc5113bc109ae02f8536e41cb826c5939ffb83f95c\" returns successfully" Jan 28 01:25:54.795331 kubelet[2759]: E0128 01:25:54.795241 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:55.413248 kubelet[2759]: E0128 01:25:55.413190 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:25:55.532946 kubelet[2759]: I0128 01:25:55.532265 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-47wml" podStartSLOduration=6.532164627 podStartE2EDuration="6.532164627s" podCreationTimestamp="2026-01-28 01:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-28 01:25:55.527363183 +0000 UTC m=+275.037133821" watchObservedRunningTime="2026-01-28 01:25:55.532164627 +0000 UTC m=+275.041935265" Jan 28 01:25:55.687754 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 28 01:25:56.417058 kubelet[2759]: E0128 01:25:56.416457 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:01.642183 systemd-networkd[1246]: lxc_health: Link UP Jan 28 01:26:01.644311 systemd-networkd[1246]: lxc_health: Gained carrier Jan 28 01:26:02.235117 kubelet[2759]: E0128 01:26:02.233884 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:02.478552 kubelet[2759]: E0128 01:26:02.478462 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:02.898744 systemd-networkd[1246]: lxc_health: Gained IPv6LL Jan 28 01:26:03.467043 kubelet[2759]: E0128 01:26:03.466710 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:14.798318 kubelet[2759]: E0128 01:26:14.796130 2759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:20.809339 containerd[1563]: time="2026-01-28T01:26:20.808923376Z" level=info msg="StopPodSandbox for \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\"" Jan 28 01:26:20.809339 containerd[1563]: time="2026-01-28T01:26:20.809036872Z" level=info msg="TearDown network for sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" successfully" Jan 28 01:26:20.809339 containerd[1563]: time="2026-01-28T01:26:20.809056780Z" level=info msg="StopPodSandbox for \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" returns successfully" Jan 28 01:26:20.811188 containerd[1563]: time="2026-01-28T01:26:20.809676737Z" level=info msg="RemovePodSandbox for \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\"" Jan 28 01:26:20.811188 containerd[1563]: time="2026-01-28T01:26:20.809708407Z" level=info msg="Forcibly stopping sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\"" Jan 28 01:26:20.811188 containerd[1563]: time="2026-01-28T01:26:20.809775104Z" level=info msg="TearDown network for sandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" successfully" Jan 28 01:26:20.829750 containerd[1563]: time="2026-01-28T01:26:20.829322472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 28 01:26:20.829750 containerd[1563]: time="2026-01-28T01:26:20.829402545Z" level=info msg="RemovePodSandbox \"587ba8aa59bb127ec06d0a96ba3ec1a351ef3134350af013c1a57cfb538e2044\" returns successfully" Jan 28 01:26:20.832887 containerd[1563]: time="2026-01-28T01:26:20.832764578Z" level=info msg="StopPodSandbox for \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\"" Jan 28 01:26:20.832887 containerd[1563]: time="2026-01-28T01:26:20.832875039Z" level=info msg="TearDown network for sandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" successfully" Jan 28 01:26:20.832887 containerd[1563]: time="2026-01-28T01:26:20.832889235Z" level=info msg="StopPodSandbox for \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" returns successfully" Jan 28 01:26:20.833558 containerd[1563]: time="2026-01-28T01:26:20.833347697Z" level=info msg="RemovePodSandbox for \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\"" Jan 28 01:26:20.833558 containerd[1563]: time="2026-01-28T01:26:20.833393803Z" level=info msg="Forcibly stopping sandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\"" Jan 28 01:26:20.833558 containerd[1563]: time="2026-01-28T01:26:20.833482272Z" level=info msg="TearDown network for sandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" successfully" Jan 28 01:26:20.841954 containerd[1563]: time="2026-01-28T01:26:20.841740681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:26:20.841954 containerd[1563]: time="2026-01-28T01:26:20.841816786Z" level=info msg="RemovePodSandbox \"dd512e44810e05c4fda242ed81a2b54afd67b7ac1f01bb886a6ba73c427d16de\" returns successfully" Jan 28 01:26:23.702637 sshd[4995]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:23.714167 systemd[1]: sshd@52-10.0.0.106:22-10.0.0.1:48146.service: Deactivated successfully. Jan 28 01:26:23.721756 systemd[1]: session-53.scope: Deactivated successfully. Jan 28 01:26:23.724152 systemd-logind[1539]: Session 53 logged out. Waiting for processes to exit. Jan 28 01:26:23.734654 systemd-logind[1539]: Removed session 53.