Jan 23 00:58:02.982329 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 00:58:02.982369 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:58:02.982379 kernel: BIOS-provided physical RAM map:
Jan 23 00:58:02.982385 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 23 00:58:02.982392 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 23 00:58:02.982398 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 00:58:02.982408 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 23 00:58:02.982415 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 23 00:58:02.982422 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 00:58:02.982429 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 00:58:02.982435 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 00:58:02.982442 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 00:58:02.982449 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 00:58:02.982456 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 00:58:02.982466 kernel: NX (Execute Disable) protection: active
Jan 23 00:58:02.982474 kernel: APIC: Static calls initialized
Jan 23 00:58:02.982480 kernel: SMBIOS 2.8 present.
Jan 23 00:58:02.982488 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 23 00:58:02.982495 kernel: DMI: Memory slots populated: 1/1
Jan 23 00:58:02.982502 kernel: Hypervisor detected: KVM
Jan 23 00:58:02.982512 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 00:58:02.982519 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 00:58:02.982526 kernel: kvm-clock: using sched offset of 7274781690 cycles
Jan 23 00:58:02.982534 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 00:58:02.982541 kernel: tsc: Detected 1999.997 MHz processor
Jan 23 00:58:02.982549 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 00:58:02.982557 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 00:58:02.982564 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 23 00:58:02.982572 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 00:58:02.982579 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 00:58:02.982589 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 00:58:02.982596 kernel: Using GB pages for direct mapping
Jan 23 00:58:02.982604 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:58:02.982611 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 23 00:58:02.982618 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982626 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982633 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982641 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 23 00:58:02.982648 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982658 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982668 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982676 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:02.982684 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 23 00:58:02.982692 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 23 00:58:02.982703 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 23 00:58:02.982711 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 23 00:58:02.982718 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 23 00:58:02.982746 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 23 00:58:02.982756 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 23 00:58:02.982764 kernel: No NUMA configuration found
Jan 23 00:58:02.982771 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 00:58:02.982786 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Jan 23 00:58:02.982794 kernel: Zone ranges:
Jan 23 00:58:02.982805 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 00:58:02.982813 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 00:58:02.982820 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 00:58:02.982828 kernel: Device empty
Jan 23 00:58:02.982836 kernel: Movable zone start for each node
Jan 23 00:58:02.982849 kernel: Early memory node ranges
Jan 23 00:58:02.982856 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 00:58:02.982868 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 23 00:58:02.982876 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 00:58:02.982884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 00:58:02.982894 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 00:58:02.982902 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 00:58:02.982910 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 23 00:58:02.982918 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 00:58:02.982925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 00:58:02.982933 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 00:58:02.982941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 00:58:02.982949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 00:58:02.982957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 00:58:02.982968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 00:58:02.982975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 00:58:02.982983 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 00:58:02.982991 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 00:58:02.982998 kernel: TSC deadline timer available
Jan 23 00:58:02.983006 kernel: CPU topo: Max. logical packages: 1
Jan 23 00:58:02.983028 kernel: CPU topo: Max. logical dies: 1
Jan 23 00:58:02.983036 kernel: CPU topo: Max. dies per package: 1
Jan 23 00:58:02.983043 kernel: CPU topo: Max. threads per core: 1
Jan 23 00:58:02.983053 kernel: CPU topo: Num. cores per package: 2
Jan 23 00:58:02.983060 kernel: CPU topo: Num. threads per package: 2
Jan 23 00:58:02.983067 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 00:58:02.983077 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 00:58:02.983085 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 00:58:02.983093 kernel: kvm-guest: setup PV sched yield
Jan 23 00:58:02.983100 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 00:58:02.983108 kernel: Booting paravirtualized kernel on KVM
Jan 23 00:58:02.984485 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 00:58:02.984506 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 00:58:02.984515 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 00:58:02.984523 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 00:58:02.984531 kernel: pcpu-alloc: [0] 0 1
Jan 23 00:58:02.984538 kernel: kvm-guest: PV spinlocks enabled
Jan 23 00:58:02.984546 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 00:58:02.984554 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:58:02.984562 kernel: random: crng init done
Jan 23 00:58:02.984572 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:58:02.984587 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:58:02.984595 kernel: Fallback order for Node 0: 0
Jan 23 00:58:02.984602 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 23 00:58:02.984610 kernel: Policy zone: Normal
Jan 23 00:58:02.984617 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:58:02.984624 kernel: software IO TLB: area num 2.
Jan 23 00:58:02.984631 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:58:02.984639 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 00:58:02.984649 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 00:58:02.984657 kernel: Dynamic Preempt: voluntary
Jan 23 00:58:02.984664 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:58:02.984672 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:58:02.984685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:58:02.984693 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:58:02.984700 kernel: Rude variant of Tasks RCU enabled.
Jan 23 00:58:02.984708 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:58:02.984715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:58:02.984722 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:58:02.984733 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:58:02.984753 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:58:02.984763 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:58:02.984771 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 00:58:02.984779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:58:02.984786 kernel: Console: colour VGA+ 80x25
Jan 23 00:58:02.984794 kernel: printk: legacy console [tty0] enabled
Jan 23 00:58:02.984802 kernel: printk: legacy console [ttyS0] enabled
Jan 23 00:58:02.984809 kernel: ACPI: Core revision 20240827
Jan 23 00:58:02.984819 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 00:58:02.984827 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 00:58:02.984835 kernel: x2apic enabled
Jan 23 00:58:02.984843 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 00:58:02.984850 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 00:58:02.984858 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 00:58:02.984866 kernel: kvm-guest: setup PV IPIs
Jan 23 00:58:02.984875 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 00:58:02.984883 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Jan 23 00:58:02.984891 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Jan 23 00:58:02.984898 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 00:58:02.984906 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 00:58:02.984914 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 00:58:02.984922 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 00:58:02.984929 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 00:58:02.984937 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 00:58:02.984947 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 00:58:02.984955 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 00:58:02.984963 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 00:58:02.984971 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 00:58:02.984980 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 00:58:02.984988 kernel: active return thunk: srso_alias_return_thunk
Jan 23 00:58:02.984996 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 00:58:02.985004 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 00:58:02.985086 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 00:58:02.985095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 00:58:02.985306 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 00:58:02.985314 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 00:58:02.985323 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 00:58:02.985331 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 00:58:02.985338 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 23 00:58:02.985347 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 23 00:58:02.985355 kernel: Freeing SMP alternatives memory: 32K
Jan 23 00:58:02.985366 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:58:02.985374 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:58:02.985382 kernel: landlock: Up and running.
Jan 23 00:58:02.985390 kernel: SELinux: Initializing.
Jan 23 00:58:02.985398 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:58:02.985406 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:58:02.985414 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 00:58:02.985422 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 00:58:02.985430 kernel: ... version: 0
Jan 23 00:58:02.985440 kernel: ... bit width: 48
Jan 23 00:58:02.985448 kernel: ... generic registers: 6
Jan 23 00:58:02.985456 kernel: ... value mask: 0000ffffffffffff
Jan 23 00:58:02.985464 kernel: ... max period: 00007fffffffffff
Jan 23 00:58:02.985472 kernel: ... fixed-purpose events: 0
Jan 23 00:58:02.985480 kernel: ... event mask: 000000000000003f
Jan 23 00:58:02.985488 kernel: signal: max sigframe size: 3376
Jan 23 00:58:02.985496 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:58:02.985504 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:58:02.985514 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:58:02.985522 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:58:02.985530 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 00:58:02.985538 kernel: .... node #0, CPUs: #1
Jan 23 00:58:02.985546 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:58:02.985554 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Jan 23 00:58:02.985562 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 235488K reserved, 0K cma-reserved)
Jan 23 00:58:02.985570 kernel: devtmpfs: initialized
Jan 23 00:58:02.985578 kernel: x86/mm: Memory block size: 128MB
Jan 23 00:58:02.985590 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:58:02.985598 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:58:02.985606 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:58:02.985614 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:58:02.985622 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:58:02.985630 kernel: audit: type=2000 audit(1769129879.548:1): state=initialized audit_enabled=0 res=1
Jan 23 00:58:02.985638 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:58:02.985646 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 00:58:02.985657 kernel: cpuidle: using governor menu
Jan 23 00:58:02.985668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:58:02.985676 kernel: dca service started, version 1.12.1
Jan 23 00:58:02.985684 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 00:58:02.985692 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 00:58:02.985700 kernel: PCI: Using configuration type 1 for base access
Jan 23 00:58:02.985708 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 00:58:02.985716 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:58:02.985724 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:58:02.985732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:58:02.985742 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:58:02.985753 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:58:02.985760 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:58:02.985768 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:58:02.985776 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:58:02.985784 kernel: ACPI: Interpreter enabled
Jan 23 00:58:02.985792 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 00:58:02.985799 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 00:58:02.985808 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 00:58:02.985818 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 00:58:02.985826 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 00:58:02.985834 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 00:58:02.986091 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:58:02.986247 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 00:58:02.986384 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 00:58:02.986395 kernel: PCI host bridge to bus 0000:00
Jan 23 00:58:02.986534 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 00:58:02.986655 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 00:58:02.986768 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 00:58:02.986880 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 00:58:02.990841 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 00:58:02.990986 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 23 00:58:02.991179 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 00:58:02.991350 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:58:02.991495 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 00:58:02.991681 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 00:58:02.991815 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 00:58:02.991945 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 00:58:02.992132 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 00:58:02.992324 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 23 00:58:02.992529 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 23 00:58:02.992657 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 00:58:02.992855 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 00:58:02.993002 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 00:58:02.993284 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 23 00:58:02.993412 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 00:58:02.993549 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 00:58:02.993677 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 00:58:02.993852 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 00:58:02.994036 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 00:58:02.994177 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 00:58:02.994482 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 23 00:58:02.994611 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 23 00:58:02.994748 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 00:58:02.994874 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 00:58:02.994885 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 00:58:02.994893 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 00:58:02.994901 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 00:58:02.994909 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 00:58:02.994916 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 00:58:02.994925 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 00:58:02.994936 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 00:58:02.994944 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 00:58:02.994952 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 00:58:02.994960 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 00:58:02.994967 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 00:58:02.994975 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 00:58:02.994983 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 00:58:02.994990 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 00:58:02.994998 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 00:58:02.995008 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 00:58:02.998439 kernel: iommu: Default domain type: Translated
Jan 23 00:58:02.998452 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 00:58:02.998461 kernel: PCI: Using ACPI for IRQ routing
Jan 23 00:58:02.998469 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 00:58:02.998478 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 23 00:58:02.998486 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 23 00:58:02.998631 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 00:58:02.998771 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 00:58:02.998894 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 00:58:02.998905 kernel: vgaarb: loaded
Jan 23 00:58:02.998913 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 00:58:02.998922 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 00:58:02.998930 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 00:58:02.998939 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 00:58:02.998947 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 00:58:02.998956 kernel: pnp: PnP ACPI init
Jan 23 00:58:02.999126 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 00:58:02.999140 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 00:58:02.999148 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 00:58:02.999156 kernel: NET: Registered PF_INET protocol family
Jan 23 00:58:02.999164 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 00:58:02.999173 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 00:58:02.999181 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 00:58:02.999188 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 00:58:02.999200 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 00:58:02.999208 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 00:58:02.999216 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:58:02.999224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:58:02.999232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 00:58:02.999240 kernel: NET: Registered PF_XDP protocol family
Jan 23 00:58:02.999356 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 00:58:02.999469 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 00:58:02.999581 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 00:58:02.999697 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 00:58:02.999817 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 00:58:02.999931 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 23 00:58:02.999945 kernel: PCI: CLS 0 bytes, default 64
Jan 23 00:58:02.999954 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 00:58:02.999962 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 23 00:58:02.999970 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Jan 23 00:58:02.999978 kernel: Initialise system trusted keyrings
Jan 23 00:58:02.999989 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 00:58:02.999997 kernel: Key type asymmetric registered
Jan 23 00:58:03.000005 kernel: Asymmetric key parser 'x509' registered
Jan 23 00:58:03.003057 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 00:58:03.003074 kernel: io scheduler mq-deadline registered
Jan 23 00:58:03.003084 kernel: io scheduler kyber registered
Jan 23 00:58:03.003092 kernel: io scheduler bfq registered
Jan 23 00:58:03.003100 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 00:58:03.003109 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 00:58:03.003122 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 00:58:03.003130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 00:58:03.003138 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 00:58:03.003147 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 00:58:03.003155 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 00:58:03.003163 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 00:58:03.003317 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 00:58:03.003331 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 00:58:03.003450 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 00:58:03.003628 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T00:58:02 UTC (1769129882)
Jan 23 00:58:03.003786 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 00:58:03.003797 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 00:58:03.003806 kernel: NET: Registered PF_INET6 protocol family
Jan 23 00:58:03.003814 kernel: Segment Routing with IPv6
Jan 23 00:58:03.003823 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 00:58:03.003831 kernel: NET: Registered PF_PACKET protocol family
Jan 23 00:58:03.003839 kernel: Key type dns_resolver registered
Jan 23 00:58:03.003851 kernel: IPI shorthand broadcast: enabled
Jan 23 00:58:03.003859 kernel: sched_clock: Marking stable (3046003953, 356662798)->(3492734819, -90068068)
Jan 23 00:58:03.003867 kernel: registered taskstats version 1
Jan 23 00:58:03.003876 kernel: Loading compiled-in X.509 certificates
Jan 23 00:58:03.003884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 00:58:03.003892 kernel: Demotion targets for Node 0: null
Jan 23 00:58:03.003900 kernel: Key type .fscrypt registered
Jan 23 00:58:03.003907 kernel: Key type fscrypt-provisioning registered
Jan 23 00:58:03.003915 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 00:58:03.003925 kernel: ima: Allocated hash algorithm: sha1
Jan 23 00:58:03.003933 kernel: ima: No architecture policies found
Jan 23 00:58:03.003942 kernel: clk: Disabling unused clocks
Jan 23 00:58:03.003949 kernel: Warning: unable to open an initial console.
Jan 23 00:58:03.003958 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 00:58:03.003966 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 00:58:03.003974 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 00:58:03.003982 kernel: Run /init as init process
Jan 23 00:58:03.003989 kernel: with arguments:
Jan 23 00:58:03.004001 kernel: /init
Jan 23 00:58:03.004009 kernel: with environment:
Jan 23 00:58:03.004052 kernel: HOME=/
Jan 23 00:58:03.004063 kernel: TERM=linux
Jan 23 00:58:03.004072 systemd[1]: Successfully made /usr/ read-only.
Jan 23 00:58:03.004084 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:58:03.004100 systemd[1]: Detected virtualization kvm.
Jan 23 00:58:03.004111 systemd[1]: Detected architecture x86-64.
Jan 23 00:58:03.004119 systemd[1]: Running in initrd.
Jan 23 00:58:03.004127 systemd[1]: No hostname configured, using default hostname.
Jan 23 00:58:03.004136 systemd[1]: Hostname set to <localhost>.
Jan 23 00:58:03.004144 systemd[1]: Initializing machine ID from random generator.
Jan 23 00:58:03.004152 systemd[1]: Queued start job for default target initrd.target.
Jan 23 00:58:03.004161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:58:03.004169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:58:03.004181 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 00:58:03.004190 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:58:03.004198 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 00:58:03.004207 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 00:58:03.004216 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 00:58:03.004225 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 00:58:03.004233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:58:03.004244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:58:03.004253 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:58:03.004261 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:58:03.004270 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:58:03.004278 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:58:03.004286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:58:03.004295 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:58:03.004303 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 00:58:03.004311 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 00:58:03.004322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:58:03.004330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:58:03.004343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:58:03.004351 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:58:03.004360 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 00:58:03.004371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:58:03.004380 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 00:58:03.004388 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 00:58:03.004397 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 00:58:03.004405 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:58:03.004413 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:58:03.004421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:58:03.004459 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 00:58:03.004484 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 00:58:03.004496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:58:03.004504 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 00:58:03.004514 systemd-journald[187]: Journal started
Jan 23 00:58:03.004532 systemd-journald[187]: Runtime Journal (/run/log/journal/b18b6e764c4d43f0bd171f1b1431290f) is 8M, max 78.2M, 70.2M free.
Jan 23 00:58:02.973961 systemd-modules-load[188]: Inserted module 'overlay'
Jan 23 00:58:03.045336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:58:03.045366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 00:58:03.045380 kernel: Bridge firewalling registered
Jan 23 00:58:03.019455 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 23 00:58:03.141048 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:58:03.141535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:58:03.142841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:03.144669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:58:03.150287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 00:58:03.152958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:58:03.157159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:58:03.165377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:58:03.185735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:58:03.186979 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 00:58:03.187258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:58:03.196396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:58:03.198682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:58:03.201240 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 00:58:03.205844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:58:03.226639 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:58:03.257187 systemd-resolved[225]: Positive Trust Anchors:
Jan 23 00:58:03.258324 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:58:03.258354 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:58:03.262779 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 23 00:58:03.267070 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:58:03.268385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:58:03.335072 kernel: SCSI subsystem initialized
Jan 23 00:58:03.345052 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 00:58:03.358054 kernel: iscsi: registered transport (tcp)
Jan 23 00:58:03.380888 kernel: iscsi: registered transport (qla4xxx)
Jan 23 00:58:03.380958 kernel: QLogic iSCSI HBA Driver
Jan 23 00:58:03.407559 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:58:03.427294 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:58:03.431196 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:58:03.502475 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:58:03.505859 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 00:58:03.559057 kernel: raid6: avx2x4 gen() 26148 MB/s
Jan 23 00:58:03.577045 kernel: raid6: avx2x2 gen() 24632 MB/s
Jan 23 00:58:03.595117 kernel: raid6: avx2x1 gen() 17545 MB/s
Jan 23 00:58:03.595137 kernel: raid6: using algorithm avx2x4 gen() 26148 MB/s
Jan 23 00:58:03.615297 kernel: raid6: .... xor() 3251 MB/s, rmw enabled
Jan 23 00:58:03.615322 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 00:58:03.637042 kernel: xor: automatically using best checksumming function avx
Jan 23 00:58:03.781069 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 00:58:03.789224 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:58:03.791652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:58:03.822379 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Jan 23 00:58:03.828282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:58:03.831992 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 00:58:03.860329 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Jan 23 00:58:03.887778 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:58:03.890669 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:58:03.974144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:58:03.978425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 00:58:04.052035 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 00:58:04.052078 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 23 00:58:04.062008 kernel: scsi host0: Virtio SCSI HBA
Jan 23 00:58:04.067030 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 00:58:04.074536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:58:04.201871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:04.206250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:58:04.214363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:58:04.217977 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:58:04.266144 kernel: libata version 3.00 loaded.
Jan 23 00:58:04.279508 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 23 00:58:04.283106 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 23 00:58:04.283374 kernel: AES CTR mode by8 optimization enabled
Jan 23 00:58:04.283387 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 00:58:04.283587 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 23 00:58:04.284510 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 00:58:04.295042 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 00:58:04.295068 kernel: GPT:9289727 != 167739391
Jan 23 00:58:04.295080 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 00:58:04.295095 kernel: GPT:9289727 != 167739391
Jan 23 00:58:04.295105 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 00:58:04.295114 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:58:04.295124 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 00:58:04.297793 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 00:58:04.297985 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 00:58:04.300058 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 00:58:04.300234 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 00:58:04.300386 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 00:58:04.305925 kernel: scsi host1: ahci
Jan 23 00:58:04.311278 kernel: scsi host2: ahci
Jan 23 00:58:04.313671 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 00:58:04.313695 kernel: scsi host3: ahci
Jan 23 00:58:04.323678 kernel: scsi host4: ahci
Jan 23 00:58:04.326037 kernel: scsi host5: ahci
Jan 23 00:58:04.329146 kernel: scsi host6: ahci
Jan 23 00:58:04.329340 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Jan 23 00:58:04.329354 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Jan 23 00:58:04.329364 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Jan 23 00:58:04.329373 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Jan 23 00:58:04.329383 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Jan 23 00:58:04.329392 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Jan 23 00:58:04.399369 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 23 00:58:04.520708 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:04.530932 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 23 00:58:04.538348 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 23 00:58:04.539158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 23 00:58:04.550216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 00:58:04.552792 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 00:58:04.575135 disk-uuid[606]: Primary Header is updated.
Jan 23 00:58:04.575135 disk-uuid[606]: Secondary Entries is updated.
Jan 23 00:58:04.575135 disk-uuid[606]: Secondary Header is updated.
Jan 23 00:58:04.584702 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:58:04.595073 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:58:04.642396 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:04.642478 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:04.648393 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:04.648419 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:04.652626 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:04.652649 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:04.750867 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:58:04.773643 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:58:04.775506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:58:04.776411 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:58:04.780122 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 00:58:04.806314 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:58:05.604045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 00:58:05.604467 disk-uuid[607]: The operation has completed successfully.
Jan 23 00:58:05.663918 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 00:58:05.664079 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 00:58:05.689453 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 00:58:05.701756 sh[634]: Success
Jan 23 00:58:05.727125 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 00:58:05.727160 kernel: device-mapper: uevent: version 1.0.3
Jan 23 00:58:05.728069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 00:58:05.744080 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 00:58:05.799923 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 00:58:05.801546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 00:58:05.824344 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 00:58:05.836103 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (646)
Jan 23 00:58:05.840040 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 00:58:05.840092 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:05.855611 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 00:58:05.855669 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 00:58:05.855683 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 00:58:05.859475 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 00:58:05.860903 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:58:05.862051 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 00:58:05.864108 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 00:58:05.867126 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 00:58:05.906236 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (680)
Jan 23 00:58:05.913048 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:05.913094 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:05.921537 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 00:58:05.921585 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:58:05.924340 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:58:05.936094 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:05.937582 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 00:58:05.942145 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 00:58:06.044123 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:58:06.056195 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:58:06.061191 ignition[740]: Ignition 2.22.0
Jan 23 00:58:06.061213 ignition[740]: Stage: fetch-offline
Jan 23 00:58:06.064286 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:58:06.061262 ignition[740]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:06.061274 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 00:58:06.061359 ignition[740]: parsed url from cmdline: ""
Jan 23 00:58:06.061364 ignition[740]: no config URL provided
Jan 23 00:58:06.061369 ignition[740]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:58:06.061378 ignition[740]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:58:06.061384 ignition[740]: failed to fetch config: resource requires networking
Jan 23 00:58:06.061550 ignition[740]: Ignition finished successfully
Jan 23 00:58:06.102868 systemd-networkd[820]: lo: Link UP
Jan 23 00:58:06.102883 systemd-networkd[820]: lo: Gained carrier
Jan 23 00:58:06.105241 systemd-networkd[820]: Enumeration completed
Jan 23 00:58:06.105365 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:58:06.106673 systemd[1]: Reached target network.target - Network.
Jan 23 00:58:06.106959 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:58:06.106965 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:58:06.109814 systemd-networkd[820]: eth0: Link UP
Jan 23 00:58:06.110067 systemd-networkd[820]: eth0: Gained carrier
Jan 23 00:58:06.110082 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:58:06.113475 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 00:58:06.152703 ignition[824]: Ignition 2.22.0
Jan 23 00:58:06.152754 ignition[824]: Stage: fetch
Jan 23 00:58:06.152906 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:06.152919 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 00:58:06.153010 ignition[824]: parsed url from cmdline: ""
Jan 23 00:58:06.153043 ignition[824]: no config URL provided
Jan 23 00:58:06.153051 ignition[824]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:58:06.153063 ignition[824]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:58:06.153093 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #1
Jan 23 00:58:06.153278 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 00:58:06.353461 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #2
Jan 23 00:58:06.353934 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 00:58:06.755048 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #3
Jan 23 00:58:06.755221 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 00:58:06.838092 systemd-networkd[820]: eth0: DHCPv4 address 172.236.108.127/24, gateway 172.236.108.1 acquired from 23.33.176.71
Jan 23 00:58:07.555451 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #4
Jan 23 00:58:07.651567 ignition[824]: PUT result: OK
Jan 23 00:58:07.651647 ignition[824]: GET http://169.254.169.254/v1/user-data: attempt #1
Jan 23 00:58:07.749372 systemd-networkd[820]: eth0: Gained IPv6LL
Jan 23 00:58:07.763728 ignition[824]: GET result: OK
Jan 23 00:58:07.763867 ignition[824]: parsing config with SHA512: a321f4e3a975cc86a0dfdfd0781554b54168b9398a48bfcf4ff0e1822941097469444f8ab0d76dde860312229946b0e898a2ec473330350bba7885d113727dd2
Jan 23 00:58:07.770432 unknown[824]: fetched base config from "system"
Jan 23 00:58:07.770779 ignition[824]: fetch: fetch complete
Jan 23 00:58:07.770443 unknown[824]: fetched base config from "system"
Jan 23 00:58:07.770785 ignition[824]: fetch: fetch passed
Jan 23 00:58:07.770449 unknown[824]: fetched user config from "akamai"
Jan 23 00:58:07.770830 ignition[824]: Ignition finished successfully
Jan 23 00:58:07.776053 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 00:58:07.790143 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 00:58:07.823222 ignition[832]: Ignition 2.22.0
Jan 23 00:58:07.823236 ignition[832]: Stage: kargs
Jan 23 00:58:07.823366 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:07.823376 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 00:58:07.824064 ignition[832]: kargs: kargs passed
Jan 23 00:58:07.826843 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 00:58:07.824102 ignition[832]: Ignition finished successfully
Jan 23 00:58:07.829914 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 00:58:07.862460 ignition[839]: Ignition 2.22.0
Jan 23 00:58:07.862474 ignition[839]: Stage: disks
Jan 23 00:58:07.862582 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:07.862592 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 00:58:07.865599 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 00:58:07.863670 ignition[839]: disks: disks passed
Jan 23 00:58:07.867072 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 00:58:07.863708 ignition[839]: Ignition finished successfully
Jan 23 00:58:07.868397 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 00:58:07.870006 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:58:07.871622 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:58:07.876286 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:58:07.878703 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 00:58:07.915184 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 00:58:07.918222 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 00:58:07.922106 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 00:58:08.036032 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 00:58:08.036800 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 00:58:08.038105 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:58:08.040498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:58:08.042635 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 00:58:08.046044 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 00:58:08.046096 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 00:58:08.046122 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:58:08.054979 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 00:58:08.057896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 00:58:08.065555 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855)
Jan 23 00:58:08.065586 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:08.069086 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:08.082184 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 00:58:08.082268 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:58:08.082281 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:58:08.087908 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:58:08.127601 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 00:58:08.134729 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Jan 23 00:58:08.139671 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 00:58:08.143935 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 00:58:08.234063 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 00:58:08.236427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 00:58:08.237776 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 00:58:08.255949 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 00:58:08.260447 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:58:08.280694 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 00:58:08.299402 ignition[968]: INFO : Ignition 2.22.0 Jan 23 00:58:08.301099 ignition[968]: INFO : Stage: mount Jan 23 00:58:08.301099 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:58:08.301099 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 00:58:08.306085 ignition[968]: INFO : mount: mount passed Jan 23 00:58:08.306085 ignition[968]: INFO : Ignition finished successfully Jan 23 00:58:08.307472 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 00:58:08.309989 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 00:58:09.038628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:58:09.064042 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (981) Jan 23 00:58:09.064077 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:58:09.068062 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:58:09.078631 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 00:58:09.078660 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:58:09.078675 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:58:09.083205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:58:09.129118 ignition[997]: INFO : Ignition 2.22.0 Jan 23 00:58:09.129118 ignition[997]: INFO : Stage: files Jan 23 00:58:09.131383 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:58:09.131383 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 00:58:09.131383 ignition[997]: DEBUG : files: compiled without relabeling support, skipping Jan 23 00:58:09.135139 ignition[997]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 00:58:09.135139 ignition[997]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 00:58:09.137407 ignition[997]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 00:58:09.137407 ignition[997]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 00:58:09.140070 ignition[997]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 00:58:09.138619 unknown[997]: wrote ssh authorized keys file for user: core Jan 23 00:58:09.142594 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 00:58:09.142594 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 00:58:09.461198 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 00:58:09.538947 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 00:58:09.538947 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 00:58:09.541776 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 00:58:09.750975 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 00:58:09.934607 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 00:58:09.934607 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:58:09.937203 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:58:09.969415 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:58:09.969415 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:58:09.969415 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:58:09.969415 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 00:58:10.366045 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 00:58:10.761402 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:58:10.761402 ignition[997]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 00:58:10.764739 ignition[997]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:58:10.765985 ignition[997]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:58:10.765985 ignition[997]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:58:10.765985 ignition[997]: INFO : files: files passed Jan 23 00:58:10.782993 ignition[997]: INFO : Ignition finished successfully Jan 23 00:58:10.769499 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:58:10.774135 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:58:10.780229 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:58:10.787933 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:58:10.788127 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 00:58:10.806665 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:58:10.806665 initrd-setup-root-after-ignition[1028]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:58:10.809266 initrd-setup-root-after-ignition[1032]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:58:10.810823 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:58:10.813140 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 00:58:10.814923 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 00:58:10.867664 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 00:58:10.868723 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 00:58:10.870341 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 00:58:10.871482 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 00:58:10.873227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 00:58:10.874103 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 00:58:10.905501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:58:10.908089 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 00:58:10.927538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 00:58:10.928451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:58:10.930247 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 00:58:10.932138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 00:58:10.932315 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:58:10.934154 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 00:58:10.935467 systemd[1]: Stopped target basic.target - Basic System. Jan 23 00:58:10.937261 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 00:58:10.938991 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:58:10.940661 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 00:58:10.942467 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:58:10.944443 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 00:58:10.946219 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:58:10.948131 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 00:58:10.949743 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 00:58:10.951676 systemd[1]: Stopped target swap.target - Swaps. Jan 23 00:58:10.953388 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 00:58:10.953580 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:58:10.955427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:58:10.956697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:58:10.958117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 00:58:10.958809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:58:10.959989 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 00:58:10.960111 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 00:58:10.962396 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 00:58:10.962563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:58:10.963610 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 00:58:10.963746 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 00:58:10.967111 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 00:58:10.968579 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 00:58:10.968691 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:58:10.974159 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 00:58:10.975187 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 00:58:11.005331 ignition[1052]: INFO : Ignition 2.22.0 Jan 23 00:58:11.005331 ignition[1052]: INFO : Stage: umount Jan 23 00:58:11.005331 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:58:11.005331 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 00:58:10.975338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 00:58:11.017179 ignition[1052]: INFO : umount: umount passed Jan 23 00:58:11.017179 ignition[1052]: INFO : Ignition finished successfully Jan 23 00:58:10.976815 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 00:58:10.976951 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:58:10.982776 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 00:58:10.985415 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 00:58:11.009109 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 00:58:11.009255 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 00:58:11.012279 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 00:58:11.012336 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 00:58:11.014146 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 00:58:11.014195 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 00:58:11.014927 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 00:58:11.014975 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 00:58:11.017892 systemd[1]: Stopped target network.target - Network. Jan 23 00:58:11.019235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 00:58:11.019288 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:58:11.021460 systemd[1]: Stopped target paths.target - Path Units. Jan 23 00:58:11.022569 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 00:58:11.023119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:58:11.026211 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 00:58:11.028310 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 00:58:11.030668 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 00:58:11.030737 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:58:11.032567 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 00:58:11.032620 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:58:11.034770 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 00:58:11.034841 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 00:58:11.036413 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 00:58:11.036464 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 00:58:11.038126 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 00:58:11.039865 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 00:58:11.042706 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 00:58:11.043694 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 00:58:11.043798 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 00:58:11.045887 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 00:58:11.046001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 00:58:11.049363 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 00:58:11.049643 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 00:58:11.049766 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 23 00:58:11.051673 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 00:58:11.053569 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 00:58:11.054728 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 00:58:11.054772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:58:11.056205 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 00:58:11.056259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 00:58:11.058390 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 00:58:11.059895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 00:58:11.059949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:58:11.061704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:58:11.061752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:58:11.065442 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 00:58:11.065491 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 00:58:11.067333 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 00:58:11.067382 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:58:11.069119 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:58:11.075113 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 00:58:11.075180 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:58:11.084174 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 00:58:11.085428 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 00:58:11.090584 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 00:58:11.090757 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:58:11.093034 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 00:58:11.093480 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 00:58:11.095065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 00:58:11.095117 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:58:11.096642 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 00:58:11.096693 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:58:11.098904 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 00:58:11.098956 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 00:58:11.100547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 00:58:11.100600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:58:11.102657 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 00:58:11.105075 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 00:58:11.105131 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:58:11.108150 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 23 00:58:11.108200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:58:11.111114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:58:11.111166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:58:11.116306 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 00:58:11.116365 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 00:58:11.116414 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:58:11.121867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 00:58:11.121978 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 00:58:11.123174 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 00:58:11.125126 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 00:58:11.151530 systemd[1]: Switching root. Jan 23 00:58:11.177653 systemd-journald[187]: Journal stopped Jan 23 00:58:12.457996 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 00:58:12.458080 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 00:58:12.458094 kernel: SELinux: policy capability open_perms=1 Jan 23 00:58:12.458104 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 00:58:12.458113 kernel: SELinux: policy capability always_check_network=0 Jan 23 00:58:12.458138 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 00:58:12.458149 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 00:58:12.458158 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 00:58:12.458167 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 00:58:12.459044 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 00:58:12.459058 kernel: audit: type=1403 audit(1769129891.352:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 00:58:12.459070 systemd[1]: Successfully loaded SELinux policy in 78.187ms. Jan 23 00:58:12.459085 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.779ms. Jan 23 00:58:12.459096 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:58:12.459107 systemd[1]: Detected virtualization kvm. Jan 23 00:58:12.459117 systemd[1]: Detected architecture x86-64. Jan 23 00:58:12.459130 systemd[1]: Detected first boot. Jan 23 00:58:12.459140 systemd[1]: Initializing machine ID from random generator. Jan 23 00:58:12.459150 zram_generator::config[1097]: No configuration found. Jan 23 00:58:12.459161 kernel: Guest personality initialized and is inactive Jan 23 00:58:12.459170 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 00:58:12.459180 kernel: Initialized host personality Jan 23 00:58:12.459189 kernel: NET: Registered PF_VSOCK protocol family Jan 23 00:58:12.459199 systemd[1]: Populated /etc with preset unit settings. Jan 23 00:58:12.459213 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jan 23 00:58:12.459224 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 00:58:12.459234 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 00:58:12.459244 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 00:58:12.459254 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 00:58:12.459264 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 00:58:12.459274 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 00:58:12.459287 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 00:58:12.459297 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 00:58:12.459307 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 00:58:12.459318 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 00:58:12.459329 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 00:58:12.459339 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:58:12.459349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:58:12.459359 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 00:58:12.459372 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 00:58:12.459570 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 00:58:12.459581 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:58:12.459592 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 00:58:12.459602 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:58:12.459612 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:58:12.459623 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 00:58:12.459636 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 00:58:12.459646 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 00:58:12.459657 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 00:58:12.459669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:58:12.459679 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:58:12.459689 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:58:12.459700 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:58:12.459710 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 00:58:12.459720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 00:58:12.459733 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 00:58:12.459743 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:58:12.459754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:58:12.459764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 23 00:58:12.459777 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 00:58:12.459787 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 00:58:12.459797 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 00:58:12.459807 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 00:58:12.459818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:12.459828 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 00:58:12.459838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 00:58:12.459849 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 00:58:12.459862 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 00:58:12.459872 systemd[1]: Reached target machines.target - Containers. Jan 23 00:58:12.459882 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 00:58:12.459893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:58:12.459904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:58:12.459914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 00:58:12.459924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:58:12.459934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:58:12.459945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:58:12.459957 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 00:58:12.459968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:58:12.459978 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 00:58:12.459988 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 00:58:12.459999 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 00:58:12.460009 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 00:58:12.461070 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 00:58:12.461084 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:58:12.461099 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:58:12.461109 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:58:12.461120 kernel: ACPI: bus type drm_connector registered Jan 23 00:58:12.461130 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:58:12.461140 kernel: fuse: init (API version 7.41) Jan 23 00:58:12.461150 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 23 00:58:12.461160 kernel: loop: module loaded Jan 23 00:58:12.461170 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 00:58:12.461183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:58:12.461193 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 00:58:12.461204 systemd[1]: Stopped verity-setup.service. Jan 23 00:58:12.461216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:12.461226 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 00:58:12.461236 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 00:58:12.461247 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 00:58:12.461257 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 00:58:12.461268 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 00:58:12.461280 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 00:58:12.461291 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 00:58:12.461301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:58:12.461334 systemd-journald[1178]: Collecting audit messages is disabled. Jan 23 00:58:12.461357 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 00:58:12.461368 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 00:58:12.461379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:58:12.461389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:58:12.461400 systemd-journald[1178]: Journal started Jan 23 00:58:12.461419 systemd-journald[1178]: Runtime Journal (/run/log/journal/3a1ad501ee8b43488e30c3b25d61a15e) is 8M, max 78.2M, 70.2M free. Jan 23 00:58:12.026429 systemd[1]: Queued start job for default target multi-user.target. Jan 23 00:58:12.049061 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 00:58:12.049910 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 00:58:12.465040 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:58:12.468675 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:58:12.468885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:58:12.470251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:58:12.470464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:58:12.471609 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 00:58:12.471882 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 00:58:12.477910 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:58:12.478144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:58:12.479464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:58:12.480771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:58:12.481937 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 23 00:58:12.483603 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 00:58:12.497838 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:58:12.502149 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 00:58:12.504203 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 00:58:12.507076 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 00:58:12.507106 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:58:12.508853 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 00:58:12.511868 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 00:58:12.513034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:58:12.522120 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 00:58:12.527729 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 00:58:12.528679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:58:12.531223 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 00:58:12.531991 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:58:12.536579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:58:12.542527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 00:58:12.546358 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 00:58:12.552752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 00:58:12.554592 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 00:58:12.570105 systemd-journald[1178]: Time spent on flushing to /var/log/journal/3a1ad501ee8b43488e30c3b25d61a15e is 59.624ms for 1011 entries. Jan 23 00:58:12.570105 systemd-journald[1178]: System Journal (/var/log/journal/3a1ad501ee8b43488e30c3b25d61a15e) is 8M, max 195.6M, 187.6M free. Jan 23 00:58:12.646857 systemd-journald[1178]: Received client request to flush runtime journal. Jan 23 00:58:12.646901 kernel: loop0: detected capacity change from 0 to 8 Jan 23 00:58:12.646927 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 00:58:12.584731 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 00:58:12.586680 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 00:58:12.590941 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 00:58:12.620503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:58:12.648249 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 00:58:12.658131 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 00:58:12.662361 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 23 00:58:12.663546 kernel: loop1: detected capacity change from 0 to 219144 Jan 23 00:58:12.666929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:58:12.676331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:58:12.707339 kernel: loop2: detected capacity change from 0 to 128560 Jan 23 00:58:12.755711 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 00:58:12.755895 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 23 00:58:12.755918 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 23 00:58:12.775559 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:58:12.800050 kernel: loop4: detected capacity change from 0 to 8 Jan 23 00:58:12.808049 kernel: loop5: detected capacity change from 0 to 219144 Jan 23 00:58:12.837051 kernel: loop6: detected capacity change from 0 to 128560 Jan 23 00:58:12.855046 kernel: loop7: detected capacity change from 0 to 110984 Jan 23 00:58:12.876071 (sd-merge)[1248]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Jan 23 00:58:12.876747 (sd-merge)[1248]: Merged extensions into '/usr'. Jan 23 00:58:12.884431 systemd[1]: Reload requested from client PID 1220 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 00:58:12.884532 systemd[1]: Reloading... Jan 23 00:58:13.007050 zram_generator::config[1273]: No configuration found. Jan 23 00:58:13.112873 ldconfig[1215]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 00:58:13.236884 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 00:58:13.237109 systemd[1]: Reloading finished in 350 ms. Jan 23 00:58:13.269110 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 00:58:13.270314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 00:58:13.271393 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 00:58:13.289977 systemd[1]: Starting ensure-sysext.service... Jan 23 00:58:13.294132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:58:13.302697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:58:13.318756 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)... Jan 23 00:58:13.318777 systemd[1]: Reloading... Jan 23 00:58:13.319617 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 00:58:13.321278 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 00:58:13.321704 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 00:58:13.322082 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 00:58:13.323099 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 00:58:13.323470 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 23 00:58:13.323616 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 23 00:58:13.334288 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. 
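The "(sd-merge)" lines above are systemd-sysext discovering the four extension images and merging them into /usr, after which systemd reloads its unit graph (the "Reload requested from client PID 1220 ('systemd-sysext')" that follows). Mechanically the merge amounts to an overlayfs stack with the extensions' /usr trees as read-only lower layers over the base /usr; a conceptual sketch, with layer paths that are illustrative rather than the hierarchy systemd actually uses:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// In overlayfs the leftmost lowerdir is the topmost layer; the base
	// /usr sits at the bottom of the stack.
	lower := "/run/extensions/kubernetes/usr" +
		":/run/extensions/docker-flatcar/usr" +
		":/run/extensions/containerd-flatcar/usr" +
		":/usr"
	// lowerdir-only overlays are inherently read-only.
	if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, "lowerdir="+lower); err != nil {
		log.Fatalf("overlay mount: %v", err)
	}
}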
Jan 23 00:58:13.335340 systemd-tmpfiles[1319]: Skipping /boot Jan 23 00:58:13.362589 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:58:13.362602 systemd-tmpfiles[1319]: Skipping /boot Jan 23 00:58:13.379481 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Jan 23 00:58:13.411042 zram_generator::config[1352]: No configuration found. Jan 23 00:58:13.654052 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 00:58:13.704045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 00:58:13.709127 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 00:58:13.710035 systemd[1]: Reloading finished in 390 ms. Jan 23 00:58:13.720466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:58:13.729127 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:58:13.739102 kernel: ACPI: button: Power Button [PWRF] Jan 23 00:58:13.744443 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:58:13.748893 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 00:58:13.749230 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 00:58:13.755237 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 00:58:13.758407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 00:58:13.766203 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:58:13.772841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:58:13.776228 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 00:58:13.801137 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 00:58:13.837277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:13.837451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:58:13.840091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:58:13.848952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:58:13.854322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:58:13.855746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:58:13.855841 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:58:13.855918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:13.862555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:13.863098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 23 00:58:13.863330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:58:13.863451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:58:13.863573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:13.868377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:13.868617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:58:13.882315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:58:13.883577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:58:13.883744 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:58:13.883898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:58:13.886710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 00:58:13.888475 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 00:58:13.901538 systemd[1]: Finished ensure-sysext.service. Jan 23 00:58:13.911473 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 00:58:13.917209 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 00:58:13.922144 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 00:58:13.923096 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 00:58:13.923231 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 00:58:13.930363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:58:13.931196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:58:13.957441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 00:58:13.963776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 00:58:13.967482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:58:13.967703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:58:13.968915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:58:13.980456 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:58:13.982234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 23 00:58:13.984500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:58:13.992183 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 00:58:13.996821 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:58:13.997097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:58:14.006770 augenrules[1485]: No rules Jan 23 00:58:14.009765 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:58:14.010611 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:58:14.031772 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 00:58:14.060054 kernel: EDAC MC: Ver: 3.0.0 Jan 23 00:58:14.114227 systemd-networkd[1427]: lo: Link UP Jan 23 00:58:14.114526 systemd-networkd[1427]: lo: Gained carrier Jan 23 00:58:14.120660 systemd-networkd[1427]: Enumeration completed Jan 23 00:58:14.120744 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:58:14.123577 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:58:14.124272 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 00:58:14.126951 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:58:14.127868 systemd-networkd[1427]: eth0: Link UP Jan 23 00:58:14.128307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 00:58:14.136590 systemd-networkd[1427]: eth0: Gained carrier Jan 23 00:58:14.136608 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:58:14.151378 systemd-resolved[1428]: Positive Trust Anchors: Jan 23 00:58:14.151790 systemd-resolved[1428]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:58:14.151820 systemd-resolved[1428]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:58:14.162844 systemd-resolved[1428]: Defaulting to hostname 'linux'. Jan 23 00:58:14.166420 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:58:14.168965 systemd[1]: Reached target network.target - Network. Jan 23 00:58:14.169729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:58:14.177959 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 00:58:14.183055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:58:14.208622 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 00:58:14.211151 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 23 00:58:14.387664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:58:14.388837 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:58:14.389655 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:58:14.390634 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:58:14.391410 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 00:58:14.392860 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:58:14.393848 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:58:14.394610 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:58:14.395463 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:58:14.395499 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:58:14.396178 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:58:14.397834 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:58:14.400595 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 00:58:14.407169 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:58:14.408324 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:58:14.409429 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 00:58:14.412369 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 00:58:14.413425 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:58:14.414778 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 00:58:14.416238 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:58:14.416916 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:58:14.417662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:58:14.417698 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:58:14.418693 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:58:14.420643 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 00:58:14.428434 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:58:14.432195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:58:14.434453 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:58:14.449434 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:58:14.451187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:58:14.454185 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 00:58:14.457336 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:58:14.464630 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
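prepare-helm.service, the unit Ignition wrote and preset-enabled earlier, now has to turn the staged tarball into an executable: extract the helm binary from /opt/helm-v3.17.3-linux-amd64.tar.gz and install it under /opt/bin. A Go sketch of that unpack step, assuming the standard helm archive layout (the linux-amd64/ prefix matches the tar output printed further down; the exact member name linux-amd64/helm is an assumption from that layout):

package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("/opt/helm-v3.17.3-linux-amd64.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The archive is gzip-compressed tar; walk its members in order.
	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			log.Fatal("linux-amd64/helm not found in archive")
		}
		if err != nil {
			log.Fatal(err)
		}
		if hdr.Name != "linux-amd64/helm" {
			continue
		}
		// Install the binary executable under /opt/bin.
		out, err := os.OpenFile("/opt/bin/helm", os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(out, tr); err != nil {
			log.Fatal(err)
		}
		out.Close()
		return
	}
}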
Jan 23 00:58:14.466648 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:58:14.470647 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 00:58:14.474183 oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jan 23 00:58:14.479150 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jan 23 00:58:14.478216 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 00:58:14.480232 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:58:14.481938 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:58:14.483532 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:58:14.493127 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:58:14.497622 oslogin_cache_refresh[1520]: Failure getting users, quitting Jan 23 00:58:14.499256 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting Jan 23 00:58:14.499256 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 00:58:14.499256 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache Jan 23 00:58:14.499256 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting Jan 23 00:58:14.499256 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 00:58:14.497643 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 00:58:14.497686 oslogin_cache_refresh[1520]: Refreshing group entry cache Jan 23 00:58:14.498230 oslogin_cache_refresh[1520]: Failure getting groups, quitting Jan 23 00:58:14.498239 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 00:58:14.504068 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 00:58:14.513154 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 00:58:14.513408 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 00:58:14.521068 jq[1530]: true Jan 23 00:58:14.529105 coreos-metadata[1513]: Jan 23 00:58:14.526 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 00:58:14.533573 jq[1516]: false Jan 23 00:58:14.534443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:58:14.535224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 00:58:14.548073 extend-filesystems[1517]: Found /dev/sda6 Jan 23 00:58:14.546753 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:58:14.550786 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 00:58:14.555415 extend-filesystems[1517]: Found /dev/sda9 Jan 23 00:58:14.563288 extend-filesystems[1517]: Checking size of /dev/sda9 Jan 23 00:58:14.567250 dbus-daemon[1514]: [system] SELinux support is enabled Jan 23 00:58:14.567405 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 00:58:14.572152 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 00:58:14.572182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:58:14.573965 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:58:14.573980 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:58:14.576520 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:58:14.577405 tar[1539]: linux-amd64/LICENSE Jan 23 00:58:14.577405 tar[1539]: linux-amd64/helm Jan 23 00:58:14.576787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:58:14.584722 jq[1538]: true Jan 23 00:58:14.592417 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:58:14.603060 update_engine[1529]: I20260123 00:58:14.597975 1529 main.cc:92] Flatcar Update Engine starting Jan 23 00:58:14.612595 extend-filesystems[1517]: Resized partition /dev/sda9 Jan 23 00:58:14.620722 extend-filesystems[1563]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:58:14.627055 update_engine[1529]: I20260123 00:58:14.617928 1529 update_check_scheduler.cc:74] Next update check in 3m52s Jan 23 00:58:14.612812 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 00:58:14.634403 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 23 00:58:14.612838 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 00:58:14.613804 systemd-logind[1528]: New seat seat0. Jan 23 00:58:14.615381 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:58:14.640950 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 00:58:14.641894 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:58:14.766086 bash[1580]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:58:14.772491 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:58:14.780196 systemd[1]: Starting sshkeys.service... Jan 23 00:58:14.843887 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:58:14.846236 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 00:58:14.916589 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:58:14.926580 systemd-networkd[1427]: eth0: DHCPv4 address 172.236.108.127/24, gateway 172.236.108.1 acquired from 23.33.176.71 Jan 23 00:58:14.926719 dbus-daemon[1514]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 00:58:14.933834 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 00:58:14.935163 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection. 
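update_engine above schedules its next poll at a fuzzed offset ("Next update check in 3m52s") so that a fleet of machines does not hit the update server in lockstep. A sketch of that kind of randomized scheduling; the base interval and fuzz factor are illustrative placeholders, not Flatcar's actual constants:

```python
import random

def next_check_seconds(base: int = 45 * 60, fuzz: float = 0.5) -> int:
    """Pick a delay uniformly within +/- fuzz/2 of the base interval."""
    low = int(base * (1 - fuzz / 2))
    high = int(base * (1 + fuzz / 2))
    return random.randint(low, high)

delay = next_check_seconds()
print(f"Next update check in {delay // 60}m{delay % 60}s")
```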
Jan 23 00:58:14.980803 coreos-metadata[1589]: Jan 23 00:58:14.980 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 00:58:15.001038 containerd[1552]: time="2026-01-23T00:58:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:58:15.007614 containerd[1552]: time="2026-01-23T00:58:15.007378490Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:58:15.037329 containerd[1552]: time="2026-01-23T00:58:15.037246575Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.91µs" Jan 23 00:58:15.037329 containerd[1552]: time="2026-01-23T00:58:15.037283885Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:58:15.037329 containerd[1552]: time="2026-01-23T00:58:15.037306975Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:58:15.037539 containerd[1552]: time="2026-01-23T00:58:15.037475795Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:58:15.037539 containerd[1552]: time="2026-01-23T00:58:15.037501195Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:58:15.037539 containerd[1552]: time="2026-01-23T00:58:15.037529525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:58:15.037642 containerd[1552]: time="2026-01-23T00:58:15.037599185Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:58:15.037642 containerd[1552]: time="2026-01-23T00:58:15.037616435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:58:15.037921 containerd[1552]: time="2026-01-23T00:58:15.037892126Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:58:15.037921 containerd[1552]: time="2026-01-23T00:58:15.037914536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:58:15.038006 containerd[1552]: time="2026-01-23T00:58:15.037929906Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:58:15.038006 containerd[1552]: time="2026-01-23T00:58:15.037941196Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.042289152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.042638663Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.042685153Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.042700263Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.042755013Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.043096243Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:58:15.043325 containerd[1552]: time="2026-01-23T00:58:15.043172223Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:58:15.050068 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 23 00:58:15.064714 extend-filesystems[1563]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 00:58:15.064714 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 00:58:15.064714 extend-filesystems[1563]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 23 00:58:15.080078 extend-filesystems[1517]: Resized filesystem in /dev/sda9 Jan 23 00:58:15.074518 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066229398Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066463998Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066479298Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066490938Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066501828Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066511558Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066521808Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066534038Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066543499Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066552439Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066563749Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066584479Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066716649Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:58:15.085225 containerd[1552]: time="2026-01-23T00:58:15.066738389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:58:15.074841 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066752259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066763209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066773489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066784549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066795529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066804819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066815269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066831939Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066841039Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066885529Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066899619Z" level=info msg="Start snapshots syncer" Jan 23 00:58:15.085541 containerd[1552]: time="2026-01-23T00:58:15.066923959Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:58:15.085773 containerd[1552]: time="2026-01-23T00:58:15.067221170Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:58:15.085773 containerd[1552]: time="2026-01-23T00:58:15.067278420Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083166643Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083399594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083423314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083434954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083445564Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083461314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083471944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083481634Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083508384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: 
time="2026-01-23T00:58:15.083519124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.083530284Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.085068886Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:58:15.085894 containerd[1552]: time="2026-01-23T00:58:15.085094556Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087106369Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087139509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087154389Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087168809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087189239Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087207280Z" level=info msg="runtime interface created" Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087212550Z" level=info msg="created NRI interface" Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087226280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087237900Z" level=info msg="Connect containerd service" Jan 23 00:58:15.087855 containerd[1552]: time="2026-01-23T00:58:15.087258770Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:58:15.091132 containerd[1552]: time="2026-01-23T00:58:15.089224143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:58:15.110667 coreos-metadata[1589]: Jan 23 00:58:15.110 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 23 00:58:15.121005 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 00:58:15.125766 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 00:58:15.126520 dbus-daemon[1514]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1596 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 00:58:15.134956 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 00:58:15.187716 systemd-timesyncd[1455]: Contacted time server 104.131.155.175:123 (0.flatcar.pool.ntp.org). 
Jan 23 00:58:15.187786 systemd-timesyncd[1455]: Initial clock synchronization to Fri 2026-01-23 00:58:15.525566 UTC. Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233105718Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233185148Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233216278Z" level=info msg="Start subscribing containerd event" Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233424999Z" level=info msg="Start recovering state" Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233515709Z" level=info msg="Start event monitor" Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233527359Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233533629Z" level=info msg="Start streaming server" Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233544579Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233552379Z" level=info msg="runtime interface starting up..." Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233558609Z" level=info msg="starting plugins..." Jan 23 00:58:15.234040 containerd[1552]: time="2026-01-23T00:58:15.233571619Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:58:15.237306 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:58:15.238256 containerd[1552]: time="2026-01-23T00:58:15.238227726Z" level=info msg="containerd successfully booted in 0.238796s" Jan 23 00:58:15.246201 coreos-metadata[1589]: Jan 23 00:58:15.246 INFO Fetch successful Jan 23 00:58:15.283031 update-ssh-keys[1618]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:58:15.284074 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 00:58:15.286840 polkitd[1612]: Started polkitd version 126 Jan 23 00:58:15.294198 systemd[1]: Finished sshkeys.service. Jan 23 00:58:15.299351 polkitd[1612]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 00:58:15.301422 polkitd[1612]: Loading rules from directory /run/polkit-1/rules.d Jan 23 00:58:15.301791 polkitd[1612]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:58:15.302293 polkitd[1612]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 00:58:15.302747 polkitd[1612]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:58:15.302911 polkitd[1612]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 00:58:15.305247 polkitd[1612]: Finished loading, compiling and executing 2 rules Jan 23 00:58:15.305911 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 00:58:15.306756 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 00:58:15.307846 polkitd[1612]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 00:58:15.327555 systemd-hostnamed[1596]: Hostname set to <172-236-108-127> (transient) Jan 23 00:58:15.327562 systemd-resolved[1428]: System hostname changed to '172-236-108-127'. 
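The transient hostname set above (172-236-108-127) is simply the DHCPv4 address from earlier in the log with dots replaced by dashes, which is easy to verify:

```python
addr = "172.236.108.127"  # DHCPv4 lease from the log above
transient = addr.replace(".", "-")
assert transient == "172-236-108-127"
print(transient)
```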
Jan 23 00:58:15.393304 tar[1539]: linux-amd64/README.md Jan 23 00:58:15.412685 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 00:58:15.537086 coreos-metadata[1513]: Jan 23 00:58:15.537 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 00:58:15.641927 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:58:15.645617 coreos-metadata[1513]: Jan 23 00:58:15.645 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 23 00:58:15.667969 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:58:15.671088 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:58:15.698295 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:58:15.698603 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:58:15.701773 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:58:15.720144 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:58:15.723231 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:58:15.727320 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 00:58:15.728652 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:58:15.862905 coreos-metadata[1513]: Jan 23 00:58:15.862 INFO Fetch successful Jan 23 00:58:15.863061 coreos-metadata[1513]: Jan 23 00:58:15.862 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 23 00:58:15.877217 systemd-networkd[1427]: eth0: Gained IPv6LL Jan 23 00:58:15.884121 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:58:15.885832 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:58:15.889247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:15.893452 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:58:15.922306 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:58:16.123266 coreos-metadata[1513]: Jan 23 00:58:16.123 INFO Fetch successful Jan 23 00:58:16.246966 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:58:16.249276 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 00:58:16.805852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:16.807507 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:58:16.809990 systemd[1]: Startup finished in 3.119s (kernel) + 8.679s (initrd) + 5.534s (userspace) = 17.332s. Jan 23 00:58:16.815531 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:58:17.294572 kubelet[1686]: E0123 00:58:17.294511 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:58:17.298217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:58:17.298420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:58:17.299208 systemd[1]: kubelet.service: Consumed 828ms CPU time, 255M memory peak. 
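kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a fresh node that file is normally written by kubeadm during init/join, so the failure is expected at this stage. A hedged pre-flight sketch that waits for the file instead of crash-looping; the timeout and poll interval are arbitrary choices, not anything the kubelet itself does:

```python
import os
import sys
import time

CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

def wait_for_config(timeout: float = 300, interval: float = 5) -> bool:
    """Poll for the kubelet config file until it appears or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(CONFIG):
            return True
        time.sleep(interval)
    return False

if not wait_for_config():
    sys.exit(f"{CONFIG} not found; has kubeadm run on this node?")
```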
Jan 23 00:58:17.478633 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:58:17.479937 systemd[1]: Started sshd@0-172.236.108.127:22-68.220.241.50:42262.service - OpenSSH per-connection server daemon (68.220.241.50:42262). Jan 23 00:58:17.660381 sshd[1698]: Accepted publickey for core from 68.220.241.50 port 42262 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:17.662258 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:17.670163 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:58:17.671560 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:58:17.680791 systemd-logind[1528]: New session 1 of user core. Jan 23 00:58:17.692662 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:58:17.696395 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:58:17.711799 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:58:17.714396 systemd-logind[1528]: New session c1 of user core. Jan 23 00:58:17.849114 systemd[1703]: Queued start job for default target default.target. Jan 23 00:58:17.856327 systemd[1703]: Created slice app.slice - User Application Slice. Jan 23 00:58:17.856416 systemd[1703]: Reached target paths.target - Paths. Jan 23 00:58:17.856545 systemd[1703]: Reached target timers.target - Timers. Jan 23 00:58:17.858080 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:58:17.869969 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:58:17.870124 systemd[1703]: Reached target sockets.target - Sockets. Jan 23 00:58:17.870172 systemd[1703]: Reached target basic.target - Basic System. Jan 23 00:58:17.870219 systemd[1703]: Reached target default.target - Main User Target. Jan 23 00:58:17.870258 systemd[1703]: Startup finished in 149ms. Jan 23 00:58:17.870652 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:58:17.879183 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:58:18.020823 systemd[1]: Started sshd@1-172.236.108.127:22-68.220.241.50:42270.service - OpenSSH per-connection server daemon (68.220.241.50:42270). Jan 23 00:58:18.191653 sshd[1714]: Accepted publickey for core from 68.220.241.50 port 42270 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:18.193769 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:18.200213 systemd-logind[1528]: New session 2 of user core. Jan 23 00:58:18.214360 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:58:18.331486 sshd[1717]: Connection closed by 68.220.241.50 port 42270 Jan 23 00:58:18.333330 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:18.339429 systemd[1]: sshd@1-172.236.108.127:22-68.220.241.50:42270.service: Deactivated successfully. Jan 23 00:58:18.341806 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 00:58:18.342749 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Jan 23 00:58:18.344561 systemd-logind[1528]: Removed session 2. Jan 23 00:58:18.366672 systemd[1]: Started sshd@2-172.236.108.127:22-68.220.241.50:42282.service - OpenSSH per-connection server daemon (68.220.241.50:42282). 
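Each inbound connection above gets its own sshd@...service instance: Accept=yes-style socket activation, where the manager accept()s on the listening socket and hands the new connection to a freshly spawned unit. A toy acceptor showing the same shape, on a placeholder port rather than 22:

```python
import socket

srv = socket.create_server(("127.0.0.1", 2222))  # placeholder port, not 22
conn, peer = srv.accept()                        # systemd performs this accept()
with conn:                                       # ...then hands the fd to a new unit
    conn.sendall(b"handled by a per-connection instance\n")
```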
Jan 23 00:58:18.539911 sshd[1723]: Accepted publickey for core from 68.220.241.50 port 42282 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:18.541599 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:18.546838 systemd-logind[1528]: New session 3 of user core. Jan 23 00:58:18.556203 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:58:18.669317 sshd[1726]: Connection closed by 68.220.241.50 port 42282 Jan 23 00:58:18.670214 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:18.676649 systemd[1]: sshd@2-172.236.108.127:22-68.220.241.50:42282.service: Deactivated successfully. Jan 23 00:58:18.679190 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 00:58:18.680442 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit. Jan 23 00:58:18.682919 systemd-logind[1528]: Removed session 3. Jan 23 00:58:18.707782 systemd[1]: Started sshd@3-172.236.108.127:22-68.220.241.50:42284.service - OpenSSH per-connection server daemon (68.220.241.50:42284). Jan 23 00:58:18.903472 sshd[1732]: Accepted publickey for core from 68.220.241.50 port 42284 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:18.905975 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:18.915133 systemd-logind[1528]: New session 4 of user core. Jan 23 00:58:18.924305 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 00:58:19.061547 sshd[1735]: Connection closed by 68.220.241.50 port 42284 Jan 23 00:58:19.063257 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:19.068151 systemd[1]: sshd@3-172.236.108.127:22-68.220.241.50:42284.service: Deactivated successfully. Jan 23 00:58:19.071397 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:58:19.072907 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:58:19.074953 systemd-logind[1528]: Removed session 4. Jan 23 00:58:19.091328 systemd[1]: Started sshd@4-172.236.108.127:22-68.220.241.50:42292.service - OpenSSH per-connection server daemon (68.220.241.50:42292). Jan 23 00:58:19.269923 sshd[1741]: Accepted publickey for core from 68.220.241.50 port 42292 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:19.271866 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:19.280900 systemd-logind[1528]: New session 5 of user core. Jan 23 00:58:19.287218 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 00:58:19.395092 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 00:58:19.395462 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:58:19.405981 sudo[1745]: pam_unix(sudo:session): session closed for user root Jan 23 00:58:19.428248 sshd[1744]: Connection closed by 68.220.241.50 port 42292 Jan 23 00:58:19.428842 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:19.434842 systemd[1]: sshd@4-172.236.108.127:22-68.220.241.50:42292.service: Deactivated successfully. Jan 23 00:58:19.437305 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 00:58:19.439801 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:58:19.441217 systemd-logind[1528]: Removed session 5. 
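The sudo entry above runs `setenforce 1`, flipping SELinux into enforcing mode. The current mode can be read back from selinuxfs, assuming the standard mount point:

```python
def selinux_mode(path: str = "/sys/fs/selinux/enforce") -> str:
    """Report the live SELinux mode from selinuxfs."""
    try:
        with open(path) as f:
            return "enforcing" if f.read().strip() == "1" else "permissive"
    except FileNotFoundError:
        return "disabled (selinuxfs not mounted)"

print(selinux_mode())
```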
Jan 23 00:58:19.464784 systemd[1]: Started sshd@5-172.236.108.127:22-68.220.241.50:42304.service - OpenSSH per-connection server daemon (68.220.241.50:42304). Jan 23 00:58:19.668199 sshd[1751]: Accepted publickey for core from 68.220.241.50 port 42304 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:19.669850 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:19.675698 systemd-logind[1528]: New session 6 of user core. Jan 23 00:58:19.683200 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 00:58:19.788340 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 00:58:19.788707 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:58:19.795735 sudo[1756]: pam_unix(sudo:session): session closed for user root Jan 23 00:58:19.802614 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 00:58:19.802982 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:58:19.813492 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:58:19.856929 augenrules[1778]: No rules Jan 23 00:58:19.858702 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:58:19.859013 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:58:19.860625 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 23 00:58:19.886691 sshd[1754]: Connection closed by 68.220.241.50 port 42304 Jan 23 00:58:19.885552 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:19.891162 systemd[1]: sshd@5-172.236.108.127:22-68.220.241.50:42304.service: Deactivated successfully. Jan 23 00:58:19.893878 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 00:58:19.896274 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:58:19.898764 systemd-logind[1528]: Removed session 6. Jan 23 00:58:19.917654 systemd[1]: Started sshd@6-172.236.108.127:22-68.220.241.50:42310.service - OpenSSH per-connection server daemon (68.220.241.50:42310). Jan 23 00:58:20.094667 sshd[1787]: Accepted publickey for core from 68.220.241.50 port 42310 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 00:58:20.096533 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:58:20.102756 systemd-logind[1528]: New session 7 of user core. Jan 23 00:58:20.113197 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 00:58:20.210878 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:58:20.211270 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:58:20.519637 systemd[1]: Starting docker.service - Docker Application Container Engine... 
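augenrules above reports "No rules" after the two rules.d files were removed. It builds /etc/audit/audit.rules by concatenating /etc/audit/rules.d/*.rules in sorted order; a simplified sketch of that merge step (the real tool also handles control options and deduplication):

```python
import glob

def merge_rules(rules_dir: str = "/etc/audit/rules.d") -> str:
    """Concatenate all *.rules fragments the way augenrules assembles audit.rules."""
    parts = []
    for path in sorted(glob.glob(f"{rules_dir}/*.rules")):
        with open(path) as f:
            parts.append(f.read())
    return "\n".join(parts) if parts else "## No rules\n"

print(merge_rules())
```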
Jan 23 00:58:20.541494 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:58:20.767486 dockerd[1809]: time="2026-01-23T00:58:20.767425631Z" level=info msg="Starting up" Jan 23 00:58:20.771522 dockerd[1809]: time="2026-01-23T00:58:20.770854551Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:58:20.783587 dockerd[1809]: time="2026-01-23T00:58:20.783561418Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:58:20.860584 dockerd[1809]: time="2026-01-23T00:58:20.860545585Z" level=info msg="Loading containers: start." Jan 23 00:58:20.871056 kernel: Initializing XFRM netlink socket Jan 23 00:58:21.159711 systemd-networkd[1427]: docker0: Link UP Jan 23 00:58:21.163218 dockerd[1809]: time="2026-01-23T00:58:21.163164357Z" level=info msg="Loading containers: done." Jan 23 00:58:21.181878 dockerd[1809]: time="2026-01-23T00:58:21.181826666Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:58:21.182082 dockerd[1809]: time="2026-01-23T00:58:21.181900911Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:58:21.182082 dockerd[1809]: time="2026-01-23T00:58:21.181988723Z" level=info msg="Initializing buildkit" Jan 23 00:58:21.204679 dockerd[1809]: time="2026-01-23T00:58:21.204593937Z" level=info msg="Completed buildkit initialization" Jan 23 00:58:21.213299 dockerd[1809]: time="2026-01-23T00:58:21.213254293Z" level=info msg="Daemon has completed initialization" Jan 23 00:58:21.213404 dockerd[1809]: time="2026-01-23T00:58:21.213312508Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:58:21.213636 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:58:22.315144 containerd[1552]: time="2026-01-23T00:58:22.314293615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 00:58:23.002623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291787523.mount: Deactivated successfully. 
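Once dockerd logs "API listen on /run/docker.sock" above, the daemon is reachable over that socket. Assuming the Docker SDK for Python is installed (`pip install docker`), a quick smoke test looks like this:

```python
import docker

client = docker.from_env()          # honours DOCKER_HOST, defaults to the local socket
print(client.ping())                # True if the daemon answers
print(client.version()["Version"])  # e.g. "28.0.4", matching the log above
```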
Jan 23 00:58:24.034238 containerd[1552]: time="2026-01-23T00:58:24.033826499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:24.040120 containerd[1552]: time="2026-01-23T00:58:24.036904293Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068079" Jan 23 00:58:24.040120 containerd[1552]: time="2026-01-23T00:58:24.037483015Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:24.040547 containerd[1552]: time="2026-01-23T00:58:24.040453027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:24.041718 containerd[1552]: time="2026-01-23T00:58:24.041692373Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.726602183s" Jan 23 00:58:24.042172 containerd[1552]: time="2026-01-23T00:58:24.042111146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 00:58:24.049290 containerd[1552]: time="2026-01-23T00:58:24.049210268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 00:58:25.055345 containerd[1552]: time="2026-01-23T00:58:25.055286436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.056301 containerd[1552]: time="2026-01-23T00:58:25.056146757Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162446" Jan 23 00:58:25.056934 containerd[1552]: time="2026-01-23T00:58:25.056911926Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.058917 containerd[1552]: time="2026-01-23T00:58:25.058898172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.060162 containerd[1552]: time="2026-01-23T00:58:25.059730696Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.010489895s" Jan 23 00:58:25.060162 containerd[1552]: time="2026-01-23T00:58:25.059764399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 00:58:25.060774 
containerd[1552]: time="2026-01-23T00:58:25.060747430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 00:58:25.869825 containerd[1552]: time="2026-01-23T00:58:25.869746899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.870705 containerd[1552]: time="2026-01-23T00:58:25.870673392Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725933" Jan 23 00:58:25.871230 containerd[1552]: time="2026-01-23T00:58:25.871187794Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.873597 containerd[1552]: time="2026-01-23T00:58:25.873166334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.874087 containerd[1552]: time="2026-01-23T00:58:25.874058850Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 813.283267ms" Jan 23 00:58:25.874130 containerd[1552]: time="2026-01-23T00:58:25.874090254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 00:58:25.875151 containerd[1552]: time="2026-01-23T00:58:25.875077488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 00:58:26.822538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456215252.mount: Deactivated successfully. 
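Each pull above reports both bytes read and elapsed time, so a rough throughput figure falls out directly. The numbers below are copied from the kube-scheduler pull:

```python
bytes_read = 15_725_933   # "bytes read=15725933"
elapsed_s = 0.813283267   # "in 813.283267ms"
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # ~19.3 MB/s for this pull
```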
Jan 23 00:58:27.089344 containerd[1552]: time="2026-01-23T00:58:27.089231105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:27.090174 containerd[1552]: time="2026-01-23T00:58:27.090153329Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965299" Jan 23 00:58:27.090730 containerd[1552]: time="2026-01-23T00:58:27.090685535Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:27.092608 containerd[1552]: time="2026-01-23T00:58:27.092570776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:27.093113 containerd[1552]: time="2026-01-23T00:58:27.092919243Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.217676546s" Jan 23 00:58:27.093113 containerd[1552]: time="2026-01-23T00:58:27.092948121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 00:58:27.093497 containerd[1552]: time="2026-01-23T00:58:27.093473340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 00:58:27.489284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:58:27.492126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:27.610007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167734967.mount: Deactivated successfully. Jan 23 00:58:27.757290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:27.771415 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:58:27.841324 kubelet[2114]: E0123 00:58:27.841231 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:58:27.851135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:58:27.851334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:58:27.851837 systemd[1]: kubelet.service: Consumed 227ms CPU time, 110.4M memory peak. 
Jan 23 00:58:28.428508 containerd[1552]: time="2026-01-23T00:58:28.428455075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:28.429531 containerd[1552]: time="2026-01-23T00:58:28.429453875Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Jan 23 00:58:28.430055 containerd[1552]: time="2026-01-23T00:58:28.430003413Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:28.432278 containerd[1552]: time="2026-01-23T00:58:28.432241772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:28.433643 containerd[1552]: time="2026-01-23T00:58:28.433121980Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.339149663s" Jan 23 00:58:28.433643 containerd[1552]: time="2026-01-23T00:58:28.433149734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 00:58:28.433984 containerd[1552]: time="2026-01-23T00:58:28.433960123Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 00:58:28.877881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442722164.mount: Deactivated successfully. 
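The pulls above log a repo digest alongside each tag; pinning by digest is the reproducible way to re-fetch exactly the same bits later. A subprocess sketch using containerd's ctr CLI against the k8s.io namespace; it assumes root and a running containerd, and the digest is copied from the coredns pull above:

```python
import subprocess

image = ("registry.k8s.io/coredns/coredns@"
         "sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c")

# Pull by digest into the same namespace the kubelet/CRI uses.
subprocess.run(["ctr", "--namespace", "k8s.io", "images", "pull", image], check=True)
```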
Jan 23 00:58:28.883559 containerd[1552]: time="2026-01-23T00:58:28.883279833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:28.884683 containerd[1552]: time="2026-01-23T00:58:28.884642009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Jan 23 00:58:28.888440 containerd[1552]: time="2026-01-23T00:58:28.888401546Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:28.892040 containerd[1552]: time="2026-01-23T00:58:28.891762136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:28.892662 containerd[1552]: time="2026-01-23T00:58:28.892606775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 458.547272ms" Jan 23 00:58:28.892662 containerd[1552]: time="2026-01-23T00:58:28.892639269Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 00:58:28.893376 containerd[1552]: time="2026-01-23T00:58:28.893309041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 00:58:29.363181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817280950.mount: Deactivated successfully. Jan 23 00:58:31.273967 containerd[1552]: time="2026-01-23T00:58:31.273918308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:31.274883 containerd[1552]: time="2026-01-23T00:58:31.274850061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166820" Jan 23 00:58:31.277066 containerd[1552]: time="2026-01-23T00:58:31.275542778Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:31.278851 containerd[1552]: time="2026-01-23T00:58:31.278801163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:31.280217 containerd[1552]: time="2026-01-23T00:58:31.280177245Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.386837363s" Jan 23 00:58:31.280217 containerd[1552]: time="2026-01-23T00:58:31.280209528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 00:58:33.535231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:58:33.537258 systemd[1]: kubelet.service: Consumed 227ms CPU time, 110.4M memory peak. Jan 23 00:58:33.541139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:33.571123 systemd[1]: Reload requested from client PID 2246 ('systemctl') (unit session-7.scope)... Jan 23 00:58:33.571136 systemd[1]: Reloading... Jan 23 00:58:33.716748 zram_generator::config[2299]: No configuration found. Jan 23 00:58:33.933141 systemd[1]: Reloading finished in 361 ms. Jan 23 00:58:33.996612 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:58:33.996720 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:58:33.997079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:33.997131 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.2M memory peak. Jan 23 00:58:33.999080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:34.185058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:34.194398 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:58:34.232453 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:58:34.232453 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:58:34.232968 kubelet[2344]: I0123 00:58:34.232684 2344 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:58:34.453968 kubelet[2344]: I0123 00:58:34.453589 2344 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:58:34.453968 kubelet[2344]: I0123 00:58:34.453617 2344 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:58:34.454518 kubelet[2344]: I0123 00:58:34.454499 2344 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:58:34.454562 kubelet[2344]: I0123 00:58:34.454525 2344 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:58:34.454836 kubelet[2344]: I0123 00:58:34.454815 2344 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:58:34.460963 kubelet[2344]: E0123 00:58:34.460932 2344 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.108.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 00:58:34.462530 kubelet[2344]: I0123 00:58:34.462354 2344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:58:34.472254 kubelet[2344]: I0123 00:58:34.472219 2344 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:58:34.476774 kubelet[2344]: I0123 00:58:34.476758 2344 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 00:58:34.478321 kubelet[2344]: I0123 00:58:34.478279 2344 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:58:34.478446 kubelet[2344]: I0123 00:58:34.478312 2344 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-108-127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:58:34.478446 kubelet[2344]: I0123 00:58:34.478443 2344 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:58:34.478622 kubelet[2344]: I0123 00:58:34.478453 2344 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:58:34.478622 kubelet[2344]: I0123 00:58:34.478543 2344 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:58:34.481900 kubelet[2344]: I0123 00:58:34.481874 2344 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:34.482759 kubelet[2344]: I0123 00:58:34.482324 2344 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:58:34.482759 kubelet[2344]: I0123 00:58:34.482346 2344 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:58:34.482759 kubelet[2344]: I0123 00:58:34.482365 2344 kubelet.go:387] "Adding apiserver pod source" Jan 23 00:58:34.482759 kubelet[2344]: I0123 00:58:34.482383 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:58:34.485982 kubelet[2344]: E0123 00:58:34.485746 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.108.127:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-108-127&limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:58:34.485982 kubelet[2344]: E0123 00:58:34.485886 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.236.108.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:58:34.486836 kubelet[2344]: I0123 00:58:34.486821 2344 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:58:34.487426 kubelet[2344]: I0123 00:58:34.487407 2344 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:58:34.487527 kubelet[2344]: I0123 00:58:34.487514 2344 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 00:58:34.487611 kubelet[2344]: W0123 00:58:34.487600 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 00:58:34.492279 kubelet[2344]: I0123 00:58:34.492265 2344 server.go:1262] "Started kubelet" Jan 23 00:58:34.493756 kubelet[2344]: I0123 00:58:34.493738 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:58:34.498428 kubelet[2344]: E0123 00:58:34.497047 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.108.127:6443/api/v1/namespaces/default/events\": dial tcp 172.236.108.127:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-108-127.188d3648b2d3a791 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-108-127,UID:172-236-108-127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-108-127,},FirstTimestamp:2026-01-23 00:58:34.492233617 +0000 UTC m=+0.293730449,LastTimestamp:2026-01-23 00:58:34.492233617 +0000 UTC m=+0.293730449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-108-127,}" Jan 23 00:58:34.498897 kubelet[2344]: I0123 00:58:34.498848 2344 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:58:34.500874 kubelet[2344]: I0123 00:58:34.500531 2344 server.go:310] "Adding debug handlers to kubelet server" Jan 23 00:58:34.505231 kubelet[2344]: I0123 00:58:34.505189 2344 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:58:34.505282 kubelet[2344]: I0123 00:58:34.505245 2344 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 00:58:34.505506 kubelet[2344]: I0123 00:58:34.505478 2344 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:58:34.505621 kubelet[2344]: I0123 00:58:34.505607 2344 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:58:34.505748 kubelet[2344]: I0123 00:58:34.505713 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:58:34.505977 kubelet[2344]: E0123 00:58:34.505959 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:34.508331 kubelet[2344]: E0123 00:58:34.508270 2344 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-127?timeout=10s\": dial tcp 172.236.108.127:6443: connect: connection refused" interval="200ms" Jan 23 00:58:34.508627 kubelet[2344]: I0123 00:58:34.508601 2344 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:58:34.508691 kubelet[2344]: I0123 00:58:34.508645 2344 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:58:34.508995 kubelet[2344]: E0123 00:58:34.508951 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.108.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:34.509762 kubelet[2344]: I0123 00:58:34.509538 2344 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:58:34.510059 kubelet[2344]: I0123 00:58:34.509809 2344 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:58:34.511547 kubelet[2344]: E0123 00:58:34.511344 2344 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:58:34.511547 kubelet[2344]: I0123 00:58:34.511483 2344 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:58:34.530217 kubelet[2344]: I0123 00:58:34.529928 2344 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:58:34.531473 kubelet[2344]: I0123 00:58:34.531443 2344 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 00:58:34.531473 kubelet[2344]: I0123 00:58:34.531467 2344 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:58:34.531560 kubelet[2344]: I0123 00:58:34.531491 2344 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:58:34.531560 kubelet[2344]: E0123 00:58:34.531543 2344 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:58:34.539276 kubelet[2344]: E0123 00:58:34.539125 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.108.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:58:34.540549 kubelet[2344]: I0123 00:58:34.540532 2344 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:58:34.540618 kubelet[2344]: I0123 00:58:34.540608 2344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:58:34.540715 kubelet[2344]: I0123 00:58:34.540704 2344 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:34.541933 kubelet[2344]: I0123 00:58:34.541906 2344 policy_none.go:49] "None policy: Start" Jan 23 00:58:34.542007 kubelet[2344]: I0123 00:58:34.541998 2344 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 00:58:34.542266 kubelet[2344]: I0123 00:58:34.542059 2344 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:58:34.543061 kubelet[2344]: I0123 00:58:34.543041 2344 policy_none.go:47] "Start" Jan 23 00:58:34.548438 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 00:58:34.564878 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:58:34.569110 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 00:58:34.578616 kubelet[2344]: E0123 00:58:34.578305 2344 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:58:34.578979 kubelet[2344]: I0123 00:58:34.578951 2344 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:58:34.579164 kubelet[2344]: I0123 00:58:34.578974 2344 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:58:34.579512 kubelet[2344]: I0123 00:58:34.579477 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:58:34.581040 kubelet[2344]: E0123 00:58:34.580878 2344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 00:58:34.581040 kubelet[2344]: E0123 00:58:34.580926 2344 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-108-127\" not found" Jan 23 00:58:34.645898 systemd[1]: Created slice kubepods-burstable-podca2df2f0d8e3655e8747e5d020966a3a.slice - libcontainer container kubepods-burstable-podca2df2f0d8e3655e8747e5d020966a3a.slice. 
Jan 23 00:58:34.658573 kubelet[2344]: E0123 00:58:34.658505 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:34.663212 systemd[1]: Created slice kubepods-burstable-pod6ecc4feaa59efec8b1079b4a216a5d5e.slice - libcontainer container kubepods-burstable-pod6ecc4feaa59efec8b1079b4a216a5d5e.slice. Jan 23 00:58:34.666874 kubelet[2344]: E0123 00:58:34.666841 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:34.669491 systemd[1]: Created slice kubepods-burstable-pode90efd1fa51d02b179abeb2efbd8bf50.slice - libcontainer container kubepods-burstable-pode90efd1fa51d02b179abeb2efbd8bf50.slice. Jan 23 00:58:34.672072 kubelet[2344]: E0123 00:58:34.672047 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:34.681752 kubelet[2344]: I0123 00:58:34.681674 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-127" Jan 23 00:58:34.682174 kubelet[2344]: E0123 00:58:34.682139 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.108.127:6443/api/v1/nodes\": dial tcp 172.236.108.127:6443: connect: connection refused" node="172-236-108-127" Jan 23 00:58:34.709194 kubelet[2344]: E0123 00:58:34.709066 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-127?timeout=10s\": dial tcp 172.236.108.127:6443: connect: connection refused" interval="400ms" Jan 23 00:58:34.710224 kubelet[2344]: I0123 00:58:34.710164 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ecc4feaa59efec8b1079b4a216a5d5e-k8s-certs\") pod \"kube-apiserver-172-236-108-127\" (UID: \"6ecc4feaa59efec8b1079b4a216a5d5e\") " pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:34.710429 kubelet[2344]: I0123 00:58:34.710382 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ecc4feaa59efec8b1079b4a216a5d5e-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-108-127\" (UID: \"6ecc4feaa59efec8b1079b4a216a5d5e\") " pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:34.710538 kubelet[2344]: I0123 00:58:34.710521 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-flexvolume-dir\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:34.710702 kubelet[2344]: I0123 00:58:34.710642 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-k8s-certs\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:34.710819 kubelet[2344]: I0123 00:58:34.710770 2344 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca2df2f0d8e3655e8747e5d020966a3a-kubeconfig\") pod \"kube-scheduler-172-236-108-127\" (UID: \"ca2df2f0d8e3655e8747e5d020966a3a\") " pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:34.710819 kubelet[2344]: I0123 00:58:34.710795 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ecc4feaa59efec8b1079b4a216a5d5e-ca-certs\") pod \"kube-apiserver-172-236-108-127\" (UID: \"6ecc4feaa59efec8b1079b4a216a5d5e\") " pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:34.710942 kubelet[2344]: I0123 00:58:34.710928 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-ca-certs\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:34.711134 kubelet[2344]: I0123 00:58:34.710984 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-kubeconfig\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:34.711134 kubelet[2344]: I0123 00:58:34.711052 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:34.884449 kubelet[2344]: I0123 00:58:34.884417 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-127" Jan 23 00:58:34.884760 kubelet[2344]: E0123 00:58:34.884738 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.108.127:6443/api/v1/nodes\": dial tcp 172.236.108.127:6443: connect: connection refused" node="172-236-108-127" Jan 23 00:58:34.962565 kubelet[2344]: E0123 00:58:34.962418 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:34.963594 containerd[1552]: time="2026-01-23T00:58:34.963552008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-108-127,Uid:ca2df2f0d8e3655e8747e5d020966a3a,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:34.969414 kubelet[2344]: E0123 00:58:34.969005 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:34.969884 containerd[1552]: time="2026-01-23T00:58:34.969830705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-108-127,Uid:6ecc4feaa59efec8b1079b4a216a5d5e,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:34.973861 kubelet[2344]: E0123 00:58:34.973825 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:34.974302 containerd[1552]: time="2026-01-23T00:58:34.974277037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-108-127,Uid:e90efd1fa51d02b179abeb2efbd8bf50,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:35.109916 kubelet[2344]: E0123 00:58:35.109848 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-127?timeout=10s\": dial tcp 172.236.108.127:6443: connect: connection refused" interval="800ms" Jan 23 00:58:35.286990 kubelet[2344]: I0123 00:58:35.286944 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-127" Jan 23 00:58:35.287546 kubelet[2344]: E0123 00:58:35.287420 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.108.127:6443/api/v1/nodes\": dial tcp 172.236.108.127:6443: connect: connection refused" node="172-236-108-127" Jan 23 00:58:35.436281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635685342.mount: Deactivated successfully. Jan 23 00:58:35.442737 containerd[1552]: time="2026-01-23T00:58:35.442676859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:35.443958 containerd[1552]: time="2026-01-23T00:58:35.443900907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:35.445091 containerd[1552]: time="2026-01-23T00:58:35.445060752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 23 00:58:35.445451 containerd[1552]: time="2026-01-23T00:58:35.445297732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:58:35.446887 containerd[1552]: time="2026-01-23T00:58:35.446848710Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:35.448187 containerd[1552]: time="2026-01-23T00:58:35.448124030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:58:35.450833 containerd[1552]: time="2026-01-23T00:58:35.450787986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.66797ms" Jan 23 00:58:35.452203 containerd[1552]: time="2026-01-23T00:58:35.452148860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:35.455045 containerd[1552]: time="2026-01-23T00:58:35.453489969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.34693ms" Jan 23 00:58:35.455045 containerd[1552]: time="2026-01-23T00:58:35.454549238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:35.463309 containerd[1552]: time="2026-01-23T00:58:35.463260333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.956321ms" Jan 23 00:58:35.482611 containerd[1552]: time="2026-01-23T00:58:35.482479562Z" level=info msg="connecting to shim 242240fc9c74cdb4c82d0e66e94270f591b25b5a01184d4ae1548e12e9e51656" address="unix:///run/containerd/s/70a475811d5dabeff5dc95aa9fb9392f2677158f15ee647a058b250a3c18b062" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:35.497622 containerd[1552]: time="2026-01-23T00:58:35.497549925Z" level=info msg="connecting to shim 2247fe4d66abdf03b3f60d9017a61a0d223f459e34530964a335e9a4103c954a" address="unix:///run/containerd/s/3acc66167d1f4cd1be0390a1091596f15fc3c5827f1bfc92455288de7c253a8c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:35.502545 kubelet[2344]: E0123 00:58:35.502501 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.108.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:58:35.512375 containerd[1552]: time="2026-01-23T00:58:35.512236441Z" level=info msg="connecting to shim 9c3f56e58f88d27f0953bdc8f4778778d00f7029c7b906038e6898fb46b3c976" address="unix:///run/containerd/s/830e925e395761083df41327b93e796f3f8a62e57fc916be6248a80ea463140c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:35.516798 kubelet[2344]: E0123 00:58:35.515594 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.108.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:35.522277 systemd[1]: Started cri-containerd-242240fc9c74cdb4c82d0e66e94270f591b25b5a01184d4ae1548e12e9e51656.scope - libcontainer container 242240fc9c74cdb4c82d0e66e94270f591b25b5a01184d4ae1548e12e9e51656. Jan 23 00:58:35.535291 systemd[1]: Started cri-containerd-2247fe4d66abdf03b3f60d9017a61a0d223f459e34530964a335e9a4103c954a.scope - libcontainer container 2247fe4d66abdf03b3f60d9017a61a0d223f459e34530964a335e9a4103c954a. Jan 23 00:58:35.561335 systemd[1]: Started cri-containerd-9c3f56e58f88d27f0953bdc8f4778778d00f7029c7b906038e6898fb46b3c976.scope - libcontainer container 9c3f56e58f88d27f0953bdc8f4778778d00f7029c7b906038e6898fb46b3c976. 
Jan 23 00:58:35.582136 kubelet[2344]: E0123 00:58:35.582074 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.108.127:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-108-127&limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:58:35.620997 containerd[1552]: time="2026-01-23T00:58:35.620939248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-108-127,Uid:ca2df2f0d8e3655e8747e5d020966a3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"242240fc9c74cdb4c82d0e66e94270f591b25b5a01184d4ae1548e12e9e51656\"" Jan 23 00:58:35.621572 kubelet[2344]: E0123 00:58:35.621441 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.108.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.108.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:58:35.626397 kubelet[2344]: E0123 00:58:35.626073 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:35.633580 containerd[1552]: time="2026-01-23T00:58:35.632970938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-108-127,Uid:6ecc4feaa59efec8b1079b4a216a5d5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2247fe4d66abdf03b3f60d9017a61a0d223f459e34530964a335e9a4103c954a\"" Jan 23 00:58:35.635550 kubelet[2344]: E0123 00:58:35.635355 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:35.639772 containerd[1552]: time="2026-01-23T00:58:35.639713032Z" level=info msg="CreateContainer within sandbox \"242240fc9c74cdb4c82d0e66e94270f591b25b5a01184d4ae1548e12e9e51656\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:58:35.641759 containerd[1552]: time="2026-01-23T00:58:35.641718223Z" level=info msg="CreateContainer within sandbox \"2247fe4d66abdf03b3f60d9017a61a0d223f459e34530964a335e9a4103c954a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:58:35.651236 containerd[1552]: time="2026-01-23T00:58:35.651163735Z" level=info msg="Container ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:35.656957 containerd[1552]: time="2026-01-23T00:58:35.656930069Z" level=info msg="Container 75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:35.660955 containerd[1552]: time="2026-01-23T00:58:35.660914415Z" level=info msg="CreateContainer within sandbox \"242240fc9c74cdb4c82d0e66e94270f591b25b5a01184d4ae1548e12e9e51656\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5\"" Jan 23 00:58:35.662407 containerd[1552]: time="2026-01-23T00:58:35.662373707Z" level=info msg="StartContainer for \"ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5\"" Jan 23 00:58:35.663800 containerd[1552]: time="2026-01-23T00:58:35.663774695Z" level=info 
msg="CreateContainer within sandbox \"2247fe4d66abdf03b3f60d9017a61a0d223f459e34530964a335e9a4103c954a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08\"" Jan 23 00:58:35.663938 containerd[1552]: time="2026-01-23T00:58:35.663874618Z" level=info msg="connecting to shim ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5" address="unix:///run/containerd/s/70a475811d5dabeff5dc95aa9fb9392f2677158f15ee647a058b250a3c18b062" protocol=ttrpc version=3 Jan 23 00:58:35.664712 containerd[1552]: time="2026-01-23T00:58:35.664689463Z" level=info msg="StartContainer for \"75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08\"" Jan 23 00:58:35.666647 containerd[1552]: time="2026-01-23T00:58:35.666625664Z" level=info msg="connecting to shim 75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08" address="unix:///run/containerd/s/3acc66167d1f4cd1be0390a1091596f15fc3c5827f1bfc92455288de7c253a8c" protocol=ttrpc version=3 Jan 23 00:58:35.675196 containerd[1552]: time="2026-01-23T00:58:35.675145784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-108-127,Uid:e90efd1fa51d02b179abeb2efbd8bf50,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c3f56e58f88d27f0953bdc8f4778778d00f7029c7b906038e6898fb46b3c976\"" Jan 23 00:58:35.676042 kubelet[2344]: E0123 00:58:35.675991 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:35.680082 containerd[1552]: time="2026-01-23T00:58:35.680061272Z" level=info msg="CreateContainer within sandbox \"9c3f56e58f88d27f0953bdc8f4778778d00f7029c7b906038e6898fb46b3c976\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:58:35.690516 containerd[1552]: time="2026-01-23T00:58:35.690457342Z" level=info msg="Container 41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:35.694170 systemd[1]: Started cri-containerd-ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5.scope - libcontainer container ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5. Jan 23 00:58:35.699897 containerd[1552]: time="2026-01-23T00:58:35.699875392Z" level=info msg="CreateContainer within sandbox \"9c3f56e58f88d27f0953bdc8f4778778d00f7029c7b906038e6898fb46b3c976\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3\"" Jan 23 00:58:35.702240 containerd[1552]: time="2026-01-23T00:58:35.702219874Z" level=info msg="StartContainer for \"41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3\"" Jan 23 00:58:35.705167 systemd[1]: Started cri-containerd-75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08.scope - libcontainer container 75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08. 
Jan 23 00:58:35.705270 containerd[1552]: time="2026-01-23T00:58:35.705155014Z" level=info msg="connecting to shim 41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3" address="unix:///run/containerd/s/830e925e395761083df41327b93e796f3f8a62e57fc916be6248a80ea463140c" protocol=ttrpc version=3 Jan 23 00:58:35.739510 systemd[1]: Started cri-containerd-41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3.scope - libcontainer container 41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3. Jan 23 00:58:35.807652 containerd[1552]: time="2026-01-23T00:58:35.807528150Z" level=info msg="StartContainer for \"ed9841fd8982a869f116e88dc6431ab184e47fb72c00073bc50b5eca930927d5\" returns successfully" Jan 23 00:58:35.825300 containerd[1552]: time="2026-01-23T00:58:35.825182093Z" level=info msg="StartContainer for \"75eee7a10bd681fd52d4d0e4c3dcb9603c81606c6420a9ef4bab2c4164bf7d08\" returns successfully" Jan 23 00:58:35.872203 containerd[1552]: time="2026-01-23T00:58:35.872137377Z" level=info msg="StartContainer for \"41c33988b824bb0e15f1a502dfbd70c39f3b905b22625f09a4d843c0fa49d8e3\" returns successfully" Jan 23 00:58:35.910826 kubelet[2344]: E0123 00:58:35.910573 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-127?timeout=10s\": dial tcp 172.236.108.127:6443: connect: connection refused" interval="1.6s" Jan 23 00:58:36.091505 kubelet[2344]: I0123 00:58:36.090759 2344 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-127" Jan 23 00:58:36.551046 kubelet[2344]: E0123 00:58:36.550720 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:36.551046 kubelet[2344]: E0123 00:58:36.550884 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:36.559202 kubelet[2344]: E0123 00:58:36.559185 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:36.559573 kubelet[2344]: E0123 00:58:36.559558 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:36.559704 kubelet[2344]: E0123 00:58:36.559691 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:36.559963 kubelet[2344]: E0123 00:58:36.559923 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:37.562250 kubelet[2344]: E0123 00:58:37.562198 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:37.562806 kubelet[2344]: E0123 00:58:37.562324 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 
00:58:37.562806 kubelet[2344]: E0123 00:58:37.562615 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:37.562806 kubelet[2344]: E0123 00:58:37.562721 2344 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:37.607262 kubelet[2344]: E0123 00:58:37.607188 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-108-127\" not found" node="172-236-108-127" Jan 23 00:58:37.740304 kubelet[2344]: E0123 00:58:37.740090 2344 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-236-108-127.188d3648b2d3a791 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-108-127,UID:172-236-108-127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-108-127,},FirstTimestamp:2026-01-23 00:58:34.492233617 +0000 UTC m=+0.293730449,LastTimestamp:2026-01-23 00:58:34.492233617 +0000 UTC m=+0.293730449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-108-127,}" Jan 23 00:58:37.797519 kubelet[2344]: I0123 00:58:37.797480 2344 kubelet_node_status.go:78] "Successfully registered node" node="172-236-108-127" Jan 23 00:58:37.797519 kubelet[2344]: E0123 00:58:37.797520 2344 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-236-108-127\": node \"172-236-108-127\" not found" Jan 23 00:58:37.815541 kubelet[2344]: E0123 00:58:37.815448 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:37.915664 kubelet[2344]: E0123 00:58:37.915542 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:38.015821 kubelet[2344]: E0123 00:58:38.015765 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:38.116380 kubelet[2344]: E0123 00:58:38.116251 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:38.216963 kubelet[2344]: E0123 00:58:38.216900 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:38.317377 kubelet[2344]: E0123 00:58:38.317338 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-127\" not found" Jan 23 00:58:38.407542 kubelet[2344]: I0123 00:58:38.407420 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:38.412601 kubelet[2344]: E0123 00:58:38.412564 2344 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-108-127\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:38.412601 kubelet[2344]: I0123 00:58:38.412591 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-127" 
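Note the retry interval on the lease-controller errors doubling through the log above: 200ms, 400ms, 800ms, then 1.6s, classic exponential backoff while the API server at 172.236.108.127:6443 still refuses connections. A hedged sketch of that doubling schedule; the ceiling value is a placeholder assumption, not something the log states:

```go
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry delay, matching the 200ms -> 400ms ->
// 800ms -> 1.6s progression of the "Failed to ensure lease exists" entries.
// The 7s ceiling is an assumption for the sketch, not a logged value.
func nextInterval(cur time.Duration) time.Duration {
	const maxDelay = 7 * time.Second
	if next := cur * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	d := 200 * time.Millisecond
	for range [4]int{} {
		fmt.Println(d) // 200ms, 400ms, 800ms, 1.6s
		d = nextInterval(d)
	}
}
```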
Jan 23 00:58:38.414145 kubelet[2344]: E0123 00:58:38.414119 2344 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-108-127\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:38.414145 kubelet[2344]: I0123 00:58:38.414140 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:38.415577 kubelet[2344]: E0123 00:58:38.415545 2344 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-108-127\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:38.489208 kubelet[2344]: I0123 00:58:38.488878 2344 apiserver.go:52] "Watching apiserver" Jan 23 00:58:38.509396 kubelet[2344]: I0123 00:58:38.509356 2344 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:58:39.644970 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)... Jan 23 00:58:39.644993 systemd[1]: Reloading... Jan 23 00:58:39.769094 zram_generator::config[2673]: No configuration found. Jan 23 00:58:40.043231 systemd[1]: Reloading finished in 397 ms. Jan 23 00:58:40.068834 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:40.080394 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 00:58:40.080688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:40.080746 systemd[1]: kubelet.service: Consumed 722ms CPU time, 125.1M memory peak. Jan 23 00:58:40.085011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:40.295399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:40.303806 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:58:40.363041 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:58:40.363041 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:58:40.363419 kubelet[2724]: I0123 00:58:40.363103 2724 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:58:40.369887 kubelet[2724]: I0123 00:58:40.369853 2724 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:58:40.369887 kubelet[2724]: I0123 00:58:40.369874 2724 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:58:40.369993 kubelet[2724]: I0123 00:58:40.369899 2724 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:58:40.369993 kubelet[2724]: I0123 00:58:40.369906 2724 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 00:58:40.370235 kubelet[2724]: I0123 00:58:40.370147 2724 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:58:40.371266 kubelet[2724]: I0123 00:58:40.371243 2724 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 00:58:40.373171 kubelet[2724]: I0123 00:58:40.373132 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:58:40.381222 kubelet[2724]: I0123 00:58:40.381198 2724 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:58:40.385533 kubelet[2724]: I0123 00:58:40.385500 2724 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 00:58:40.385805 kubelet[2724]: I0123 00:58:40.385767 2724 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:58:40.385916 kubelet[2724]: I0123 00:58:40.385797 2724 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-108-127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:58:40.385916 kubelet[2724]: I0123 00:58:40.385914 2724 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:58:40.386114 kubelet[2724]: I0123 00:58:40.385923 2724 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:58:40.386114 kubelet[2724]: I0123 00:58:40.385944 2724 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:58:40.387079 kubelet[2724]: I0123 00:58:40.387042 2724 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:40.387448 kubelet[2724]: I0123 00:58:40.387424 2724 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:58:40.387448 kubelet[2724]: I0123 00:58:40.387442 2724 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:58:40.389137 kubelet[2724]: I0123 00:58:40.389082 2724 kubelet.go:387] "Adding apiserver pod source" 
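The nodeConfig dump above lists the hard eviction thresholds this kubelet runs with: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. Each is a LessThan rule against either an absolute quantity or a percentage of capacity. A simplified sketch of that rule as logged; this is not the kubelet eviction manager itself:

```go
package main

import "fmt"

// threshold mirrors one HardEvictionThresholds entry from the nodeConfig
// above: a signal trips when the observed value falls below either an
// absolute quantity or a fraction of capacity.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes/inodes; 0 means "use percent"
	percent  float64 // fraction of capacity; 0 means "use quantity"
}

// crossed reports whether an observation violates a LessThan threshold.
func crossed(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.quantity == 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.10}       // 10%

	fmt.Println(crossed(memory, 80<<20, 8<<30)) // true: only 80Mi left
	fmt.Println(crossed(nodefs, 3<<30, 20<<30)) // false: 15% of disk free
}
```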
Jan 23 00:58:40.389137 kubelet[2724]: I0123 00:58:40.389111 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:58:40.391747 kubelet[2724]: I0123 00:58:40.391691 2724 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:58:40.392282 kubelet[2724]: I0123 00:58:40.392117 2724 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:58:40.392282 kubelet[2724]: I0123 00:58:40.392143 2724 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 00:58:40.396994 kubelet[2724]: I0123 00:58:40.396974 2724 server.go:1262] "Started kubelet" Jan 23 00:58:40.398720 kubelet[2724]: I0123 00:58:40.398464 2724 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:58:40.398869 kubelet[2724]: I0123 00:58:40.398847 2724 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 00:58:40.399387 kubelet[2724]: I0123 00:58:40.399375 2724 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:58:40.399528 kubelet[2724]: I0123 00:58:40.399503 2724 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:58:40.400494 kubelet[2724]: I0123 00:58:40.400479 2724 server.go:310] "Adding debug handlers to kubelet server" Jan 23 00:58:40.403185 kubelet[2724]: I0123 00:58:40.400532 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:58:40.408793 kubelet[2724]: I0123 00:58:40.408742 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:58:40.419815 kubelet[2724]: I0123 00:58:40.419692 2724 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:58:40.420490 kubelet[2724]: I0123 00:58:40.420115 2724 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:58:40.420490 kubelet[2724]: I0123 00:58:40.420462 2724 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:58:40.424049 kubelet[2724]: I0123 00:58:40.423916 2724 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:58:40.424049 kubelet[2724]: I0123 00:58:40.424005 2724 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:58:40.430551 kubelet[2724]: E0123 00:58:40.430479 2724 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:58:40.431580 kubelet[2724]: I0123 00:58:40.431518 2724 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:58:40.438140 kubelet[2724]: I0123 00:58:40.438108 2724 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 00:58:40.438140 kubelet[2724]: I0123 00:58:40.438128 2724 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:58:40.438222 kubelet[2724]: I0123 00:58:40.438147 2724 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:58:40.438222 kubelet[2724]: E0123 00:58:40.438212 2724 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:58:40.440542 kubelet[2724]: I0123 00:58:40.440511 2724 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484537 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484561 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484614 2724 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484777 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484831 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484865 2724 policy_none.go:49] "None policy: Start" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484877 2724 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.484892 2724 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.485050 2724 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 00:58:40.485058 kubelet[2724]: I0123 00:58:40.485067 2724 policy_none.go:47] "Start" Jan 23 00:58:40.491532 kubelet[2724]: E0123 00:58:40.491335 2724 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:58:40.491712 kubelet[2724]: I0123 00:58:40.491687 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:58:40.491744 kubelet[2724]: I0123 00:58:40.491707 2724 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:58:40.493061 kubelet[2724]: I0123 00:58:40.492927 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:58:40.495331 kubelet[2724]: E0123 00:58:40.495308 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:58:40.538969 kubelet[2724]: I0123 00:58:40.538933 2724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:40.539360 kubelet[2724]: I0123 00:58:40.538942 2724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:40.539512 kubelet[2724]: I0123 00:58:40.539131 2724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:40.600981 kubelet[2724]: I0123 00:58:40.599935 2724 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-127" Jan 23 00:58:40.608737 kubelet[2724]: I0123 00:58:40.608514 2724 kubelet_node_status.go:124] "Node was previously registered" node="172-236-108-127" Jan 23 00:58:40.609110 kubelet[2724]: I0123 00:58:40.609083 2724 kubelet_node_status.go:78] "Successfully registered node" node="172-236-108-127" Jan 23 00:58:40.626327 kubelet[2724]: I0123 00:58:40.626295 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-kubeconfig\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:40.626327 kubelet[2724]: I0123 00:58:40.626326 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:40.626442 kubelet[2724]: I0123 00:58:40.626363 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca2df2f0d8e3655e8747e5d020966a3a-kubeconfig\") pod \"kube-scheduler-172-236-108-127\" (UID: \"ca2df2f0d8e3655e8747e5d020966a3a\") " pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:40.626442 kubelet[2724]: I0123 00:58:40.626388 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ecc4feaa59efec8b1079b4a216a5d5e-k8s-certs\") pod \"kube-apiserver-172-236-108-127\" (UID: \"6ecc4feaa59efec8b1079b4a216a5d5e\") " pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:40.626442 kubelet[2724]: I0123 00:58:40.626411 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-ca-certs\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:40.626519 kubelet[2724]: I0123 00:58:40.626476 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-flexvolume-dir\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:40.626519 kubelet[2724]: I0123 00:58:40.626497 2724 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e90efd1fa51d02b179abeb2efbd8bf50-k8s-certs\") pod \"kube-controller-manager-172-236-108-127\" (UID: \"e90efd1fa51d02b179abeb2efbd8bf50\") " pod="kube-system/kube-controller-manager-172-236-108-127" Jan 23 00:58:40.626519 kubelet[2724]: I0123 00:58:40.626511 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ecc4feaa59efec8b1079b4a216a5d5e-ca-certs\") pod \"kube-apiserver-172-236-108-127\" (UID: \"6ecc4feaa59efec8b1079b4a216a5d5e\") " pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:40.626587 kubelet[2724]: I0123 00:58:40.626528 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ecc4feaa59efec8b1079b4a216a5d5e-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-108-127\" (UID: \"6ecc4feaa59efec8b1079b4a216a5d5e\") " pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:40.645568 sudo[2763]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 00:58:40.646009 sudo[2763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 00:58:40.849026 kubelet[2724]: E0123 00:58:40.848010 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:40.849620 kubelet[2724]: E0123 00:58:40.849603 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:40.850864 kubelet[2724]: E0123 00:58:40.850848 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:40.992815 sudo[2763]: pam_unix(sudo:session): session closed for user root Jan 23 00:58:41.390513 kubelet[2724]: I0123 00:58:41.390280 2724 apiserver.go:52] "Watching apiserver" Jan 23 00:58:41.421218 kubelet[2724]: I0123 00:58:41.421177 2724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:58:41.464677 kubelet[2724]: I0123 00:58:41.464650 2724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:41.465036 kubelet[2724]: E0123 00:58:41.464985 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:41.465287 kubelet[2724]: I0123 00:58:41.465269 2724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:41.472757 kubelet[2724]: E0123 00:58:41.472736 2724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-108-127\" already exists" pod="kube-system/kube-scheduler-172-236-108-127" Jan 23 00:58:41.473215 kubelet[2724]: E0123 00:58:41.473189 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 
172.232.0.15 172.232.0.18" Jan 23 00:58:41.474452 kubelet[2724]: E0123 00:58:41.474379 2724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-108-127\" already exists" pod="kube-system/kube-apiserver-172-236-108-127" Jan 23 00:58:41.474734 kubelet[2724]: E0123 00:58:41.474623 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:41.484854 kubelet[2724]: I0123 00:58:41.484743 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-108-127" podStartSLOduration=1.484732106 podStartE2EDuration="1.484732106s" podCreationTimestamp="2026-01-23 00:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:41.483799308 +0000 UTC m=+1.171886843" watchObservedRunningTime="2026-01-23 00:58:41.484732106 +0000 UTC m=+1.172819641" Jan 23 00:58:41.491000 kubelet[2724]: I0123 00:58:41.490953 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-108-127" podStartSLOduration=1.490942755 podStartE2EDuration="1.490942755s" podCreationTimestamp="2026-01-23 00:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:41.490543467 +0000 UTC m=+1.178631002" watchObservedRunningTime="2026-01-23 00:58:41.490942755 +0000 UTC m=+1.179030300" Jan 23 00:58:41.498144 kubelet[2724]: I0123 00:58:41.497999 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-108-127" podStartSLOduration=1.497991131 podStartE2EDuration="1.497991131s" podCreationTimestamp="2026-01-23 00:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:41.496681988 +0000 UTC m=+1.184769523" watchObservedRunningTime="2026-01-23 00:58:41.497991131 +0000 UTC m=+1.186078666" Jan 23 00:58:42.331721 sudo[1791]: pam_unix(sudo:session): session closed for user root Jan 23 00:58:42.352825 sshd[1790]: Connection closed by 68.220.241.50 port 42310 Jan 23 00:58:42.353336 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:42.358939 systemd[1]: sshd@6-172.236.108.127:22-68.220.241.50:42310.service: Deactivated successfully. Jan 23 00:58:42.359310 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:58:42.361533 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:58:42.361844 systemd[1]: session-7.scope: Consumed 4.024s CPU time, 274.7M memory peak. Jan 23 00:58:42.364515 systemd-logind[1528]: Removed session 7. 
Jan 23 00:58:42.466495 kubelet[2724]: E0123 00:58:42.466454 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:42.466927 kubelet[2724]: E0123 00:58:42.466907 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:43.468117 kubelet[2724]: E0123 00:58:43.468088 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:45.165716 kubelet[2724]: I0123 00:58:45.165668 2724 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 00:58:45.166174 containerd[1552]: time="2026-01-23T00:58:45.166138098Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 00:58:45.166486 kubelet[2724]: I0123 00:58:45.166327 2724 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 00:58:45.361420 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 00:58:46.286199 systemd[1]: Created slice kubepods-burstable-podacf5fbbc_7e70_4688_8d5b_65681258ce02.slice - libcontainer container kubepods-burstable-podacf5fbbc_7e70_4688_8d5b_65681258ce02.slice. Jan 23 00:58:46.296695 systemd[1]: Created slice kubepods-besteffort-pod7561b25c_e7f2_4110_8fbc_2597efe3e1ac.slice - libcontainer container kubepods-besteffort-pod7561b25c_e7f2_4110_8fbc_2597efe3e1ac.slice. Jan 23 00:58:46.363915 kubelet[2724]: I0123 00:58:46.363678 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7561b25c-e7f2-4110-8fbc-2597efe3e1ac-lib-modules\") pod \"kube-proxy-rjd48\" (UID: \"7561b25c-e7f2-4110-8fbc-2597efe3e1ac\") " pod="kube-system/kube-proxy-rjd48" Jan 23 00:58:46.363915 kubelet[2724]: I0123 00:58:46.363715 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9fp\" (UniqueName: \"kubernetes.io/projected/7561b25c-e7f2-4110-8fbc-2597efe3e1ac-kube-api-access-8d9fp\") pod \"kube-proxy-rjd48\" (UID: \"7561b25c-e7f2-4110-8fbc-2597efe3e1ac\") " pod="kube-system/kube-proxy-rjd48" Jan 23 00:58:46.363915 kubelet[2724]: I0123 00:58:46.363738 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acf5fbbc-7e70-4688-8d5b-65681258ce02-clustermesh-secrets\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.363915 kubelet[2724]: I0123 00:58:46.363753 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-config-path\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.363915 kubelet[2724]: I0123 00:58:46.363769 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/7561b25c-e7f2-4110-8fbc-2597efe3e1ac-kube-proxy\") pod \"kube-proxy-rjd48\" (UID: \"7561b25c-e7f2-4110-8fbc-2597efe3e1ac\") " pod="kube-system/kube-proxy-rjd48" Jan 23 00:58:46.364511 kubelet[2724]: I0123 00:58:46.363790 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-run\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364511 kubelet[2724]: I0123 00:58:46.363812 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-hostproc\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364511 kubelet[2724]: I0123 00:58:46.363833 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-lib-modules\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364511 kubelet[2724]: I0123 00:58:46.363856 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-net\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364511 kubelet[2724]: I0123 00:58:46.363886 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-hubble-tls\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364511 kubelet[2724]: I0123 00:58:46.363916 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-bpf-maps\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364861 kubelet[2724]: I0123 00:58:46.363951 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cni-path\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364861 kubelet[2724]: I0123 00:58:46.363966 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-xtables-lock\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364861 kubelet[2724]: I0123 00:58:46.363981 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-kernel\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364861 
kubelet[2724]: I0123 00:58:46.363996 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-cgroup\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364861 kubelet[2724]: I0123 00:58:46.364043 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-etc-cni-netd\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364861 kubelet[2724]: I0123 00:58:46.364063 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88tsg\" (UniqueName: \"kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-kube-api-access-88tsg\") pod \"cilium-xpm69\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") " pod="kube-system/cilium-xpm69" Jan 23 00:58:46.364998 kubelet[2724]: I0123 00:58:46.364079 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7561b25c-e7f2-4110-8fbc-2597efe3e1ac-xtables-lock\") pod \"kube-proxy-rjd48\" (UID: \"7561b25c-e7f2-4110-8fbc-2597efe3e1ac\") " pod="kube-system/kube-proxy-rjd48" Jan 23 00:58:46.442347 systemd[1]: Created slice kubepods-besteffort-podfb8ce8a8_7bc0_4639_8381_713ffe588ba9.slice - libcontainer container kubepods-besteffort-podfb8ce8a8_7bc0_4639_8381_713ffe588ba9.slice. Jan 23 00:58:46.465104 kubelet[2724]: I0123 00:58:46.465079 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nckkl\" (UniqueName: \"kubernetes.io/projected/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-kube-api-access-nckkl\") pod \"cilium-operator-6f9c7c5859-xr59n\" (UID: \"fb8ce8a8-7bc0-4639-8381-713ffe588ba9\") " pod="kube-system/cilium-operator-6f9c7c5859-xr59n" Jan 23 00:58:46.465342 kubelet[2724]: I0123 00:58:46.465325 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-xr59n\" (UID: \"fb8ce8a8-7bc0-4639-8381-713ffe588ba9\") " pod="kube-system/cilium-operator-6f9c7c5859-xr59n" Jan 23 00:58:46.595056 kubelet[2724]: E0123 00:58:46.594945 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:46.596722 containerd[1552]: time="2026-01-23T00:58:46.596681580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpm69,Uid:acf5fbbc-7e70-4688-8d5b-65681258ce02,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:46.605275 kubelet[2724]: E0123 00:58:46.605103 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:46.605412 containerd[1552]: time="2026-01-23T00:58:46.605379548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rjd48,Uid:7561b25c-e7f2-4110-8fbc-2597efe3e1ac,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:46.612118 containerd[1552]: 
time="2026-01-23T00:58:46.612093248Z" level=info msg="connecting to shim a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a" address="unix:///run/containerd/s/897250c28be974f95b178d34af768aeaa529470dffc80be7428ca21c65e38db3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:46.622635 containerd[1552]: time="2026-01-23T00:58:46.622583057Z" level=info msg="connecting to shim c773f811aac631c21b7b0a2c483413b8421b9b404fbb8e26397d78d13a4d4c07" address="unix:///run/containerd/s/1f788bfad1ad3efccac491aa542e26db6b6a697783b7e215eeab1dd4900d540a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:46.649174 systemd[1]: Started cri-containerd-a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a.scope - libcontainer container a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a. Jan 23 00:58:46.655611 systemd[1]: Started cri-containerd-c773f811aac631c21b7b0a2c483413b8421b9b404fbb8e26397d78d13a4d4c07.scope - libcontainer container c773f811aac631c21b7b0a2c483413b8421b9b404fbb8e26397d78d13a4d4c07. Jan 23 00:58:46.688262 containerd[1552]: time="2026-01-23T00:58:46.688187610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpm69,Uid:acf5fbbc-7e70-4688-8d5b-65681258ce02,Namespace:kube-system,Attempt:0,} returns sandbox id \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\"" Jan 23 00:58:46.689834 kubelet[2724]: E0123 00:58:46.688962 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:46.691374 containerd[1552]: time="2026-01-23T00:58:46.691335775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 00:58:46.697097 containerd[1552]: time="2026-01-23T00:58:46.697077286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rjd48,Uid:7561b25c-e7f2-4110-8fbc-2597efe3e1ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c773f811aac631c21b7b0a2c483413b8421b9b404fbb8e26397d78d13a4d4c07\"" Jan 23 00:58:46.697928 kubelet[2724]: E0123 00:58:46.697754 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:46.704709 containerd[1552]: time="2026-01-23T00:58:46.704683594Z" level=info msg="CreateContainer within sandbox \"c773f811aac631c21b7b0a2c483413b8421b9b404fbb8e26397d78d13a4d4c07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 00:58:46.712356 containerd[1552]: time="2026-01-23T00:58:46.711614552Z" level=info msg="Container 5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:46.722128 containerd[1552]: time="2026-01-23T00:58:46.722105271Z" level=info msg="CreateContainer within sandbox \"c773f811aac631c21b7b0a2c483413b8421b9b404fbb8e26397d78d13a4d4c07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593\"" Jan 23 00:58:46.723394 containerd[1552]: time="2026-01-23T00:58:46.723374639Z" level=info msg="StartContainer for \"5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593\"" Jan 23 00:58:46.725249 containerd[1552]: time="2026-01-23T00:58:46.725226325Z" level=info msg="connecting to shim 
5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593" address="unix:///run/containerd/s/1f788bfad1ad3efccac491aa542e26db6b6a697783b7e215eeab1dd4900d540a" protocol=ttrpc version=3 Jan 23 00:58:46.742149 systemd[1]: Started cri-containerd-5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593.scope - libcontainer container 5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593. Jan 23 00:58:46.750230 kubelet[2724]: E0123 00:58:46.750168 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:46.751202 containerd[1552]: time="2026-01-23T00:58:46.751164299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-xr59n,Uid:fb8ce8a8-7bc0-4639-8381-713ffe588ba9,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:46.772135 containerd[1552]: time="2026-01-23T00:58:46.772101184Z" level=info msg="connecting to shim 5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea" address="unix:///run/containerd/s/110dadd67b9be8b98d4b9daf60ad6b6e52a554aac67ed949009fa2494b54f42d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:46.798316 systemd[1]: Started cri-containerd-5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea.scope - libcontainer container 5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea. Jan 23 00:58:46.834440 containerd[1552]: time="2026-01-23T00:58:46.834401822Z" level=info msg="StartContainer for \"5f4a2cd1c3617dd3e8cd4c0bb51fcb4c4db05b229de3b9fa6af4e1011f3f6593\" returns successfully" Jan 23 00:58:46.870250 containerd[1552]: time="2026-01-23T00:58:46.870120381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-xr59n,Uid:fb8ce8a8-7bc0-4639-8381-713ffe588ba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\"" Jan 23 00:58:46.871995 kubelet[2724]: E0123 00:58:46.871959 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:47.486060 kubelet[2724]: E0123 00:58:47.485785 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:47.497995 kubelet[2724]: I0123 00:58:47.497933 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rjd48" podStartSLOduration=1.49791523 podStartE2EDuration="1.49791523s" podCreationTimestamp="2026-01-23 00:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:47.497654008 +0000 UTC m=+7.185741543" watchObservedRunningTime="2026-01-23 00:58:47.49791523 +0000 UTC m=+7.186002775" Jan 23 00:58:50.364499 kubelet[2724]: E0123 00:58:50.364471 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:50.381271 kubelet[2724]: E0123 00:58:50.380281 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 
00:58:50.493333 kubelet[2724]: E0123 00:58:50.493275 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:50.495585 kubelet[2724]: E0123 00:58:50.495555 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:50.568591 kubelet[2724]: E0123 00:58:50.568557 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:51.496574 kubelet[2724]: E0123 00:58:51.496517 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:51.497289 kubelet[2724]: E0123 00:58:51.497006 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:51.499536 kubelet[2724]: E0123 00:58:51.499520 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:55.776725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204400016.mount: Deactivated successfully. Jan 23 00:58:57.275417 containerd[1552]: time="2026-01-23T00:58:57.275371891Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:57.276399 containerd[1552]: time="2026-01-23T00:58:57.276066307Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 00:58:57.276941 containerd[1552]: time="2026-01-23T00:58:57.276887783Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:57.278215 containerd[1552]: time="2026-01-23T00:58:57.278188087Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.58675667s" Jan 23 00:58:57.278254 containerd[1552]: time="2026-01-23T00:58:57.278218874Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 00:58:57.280244 containerd[1552]: time="2026-01-23T00:58:57.280212474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 00:58:57.282899 containerd[1552]: time="2026-01-23T00:58:57.282869934Z" level=info msg="CreateContainer within sandbox 
\"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 00:58:57.293319 containerd[1552]: time="2026-01-23T00:58:57.293217530Z" level=info msg="Container 202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:57.295225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236579593.mount: Deactivated successfully. Jan 23 00:58:57.298764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756953466.mount: Deactivated successfully. Jan 23 00:58:57.299838 containerd[1552]: time="2026-01-23T00:58:57.299760775Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\"" Jan 23 00:58:57.301529 containerd[1552]: time="2026-01-23T00:58:57.301464818Z" level=info msg="StartContainer for \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\"" Jan 23 00:58:57.302576 containerd[1552]: time="2026-01-23T00:58:57.302523012Z" level=info msg="connecting to shim 202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c" address="unix:///run/containerd/s/897250c28be974f95b178d34af768aeaa529470dffc80be7428ca21c65e38db3" protocol=ttrpc version=3 Jan 23 00:58:57.329162 systemd[1]: Started cri-containerd-202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c.scope - libcontainer container 202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c. Jan 23 00:58:57.361405 containerd[1552]: time="2026-01-23T00:58:57.361364503Z" level=info msg="StartContainer for \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\" returns successfully" Jan 23 00:58:57.377675 systemd[1]: cri-containerd-202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c.scope: Deactivated successfully. 
Jan 23 00:58:57.380714 containerd[1552]: time="2026-01-23T00:58:57.380677226Z" level=info msg="received container exit event container_id:\"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\" id:\"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\" pid:3148 exited_at:{seconds:1769129937 nanos:380105657}" Jan 23 00:58:57.514108 kubelet[2724]: E0123 00:58:57.514070 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:57.531623 containerd[1552]: time="2026-01-23T00:58:57.530967765Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 00:58:57.538385 containerd[1552]: time="2026-01-23T00:58:57.538344573Z" level=info msg="Container fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:57.550025 containerd[1552]: time="2026-01-23T00:58:57.549963027Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\"" Jan 23 00:58:57.550669 containerd[1552]: time="2026-01-23T00:58:57.550514997Z" level=info msg="StartContainer for \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\"" Jan 23 00:58:57.552255 containerd[1552]: time="2026-01-23T00:58:57.552195677Z" level=info msg="connecting to shim fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800" address="unix:///run/containerd/s/897250c28be974f95b178d34af768aeaa529470dffc80be7428ca21c65e38db3" protocol=ttrpc version=3 Jan 23 00:58:57.575169 systemd[1]: Started cri-containerd-fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800.scope - libcontainer container fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800. Jan 23 00:58:57.609143 containerd[1552]: time="2026-01-23T00:58:57.608623761Z" level=info msg="StartContainer for \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\" returns successfully" Jan 23 00:58:57.627096 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:58:57.627479 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:58:57.628122 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:58:57.631258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:58:57.631463 systemd[1]: cri-containerd-fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800.scope: Deactivated successfully. Jan 23 00:58:57.635970 containerd[1552]: time="2026-01-23T00:58:57.635940361Z" level=info msg="received container exit event container_id:\"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\" id:\"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\" pid:3192 exited_at:{seconds:1769129937 nanos:634729355}" Jan 23 00:58:57.656942 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:58:58.289900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c-rootfs.mount: Deactivated successfully. 
Jan 23 00:58:58.519427 kubelet[2724]: E0123 00:58:58.519381 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:58.527059 containerd[1552]: time="2026-01-23T00:58:58.526345596Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 00:58:58.537703 containerd[1552]: time="2026-01-23T00:58:58.537664607Z" level=info msg="Container 7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:58.547105 containerd[1552]: time="2026-01-23T00:58:58.547001948Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\"" Jan 23 00:58:58.548423 containerd[1552]: time="2026-01-23T00:58:58.548403759Z" level=info msg="StartContainer for \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\"" Jan 23 00:58:58.549958 containerd[1552]: time="2026-01-23T00:58:58.549899898Z" level=info msg="connecting to shim 7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e" address="unix:///run/containerd/s/897250c28be974f95b178d34af768aeaa529470dffc80be7428ca21c65e38db3" protocol=ttrpc version=3 Jan 23 00:58:58.570196 systemd[1]: Started cri-containerd-7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e.scope - libcontainer container 7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e. Jan 23 00:58:58.646350 containerd[1552]: time="2026-01-23T00:58:58.646298498Z" level=info msg="StartContainer for \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\" returns successfully" Jan 23 00:58:58.649540 systemd[1]: cri-containerd-7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e.scope: Deactivated successfully. Jan 23 00:58:58.651917 containerd[1552]: time="2026-01-23T00:58:58.651885331Z" level=info msg="received container exit event container_id:\"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\" id:\"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\" pid:3241 exited_at:{seconds:1769129938 nanos:651617963}" Jan 23 00:58:58.674729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e-rootfs.mount: Deactivated successfully. Jan 23 00:58:59.459797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183021688.mount: Deactivated successfully. Jan 23 00:58:59.478695 update_engine[1529]: I20260123 00:58:59.477873 1529 update_attempter.cc:509] Updating boot flags... 
Jan 23 00:58:59.540367 kubelet[2724]: E0123 00:58:59.540323 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:58:59.549322 containerd[1552]: time="2026-01-23T00:58:59.549234069Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 00:58:59.574130 containerd[1552]: time="2026-01-23T00:58:59.572120035Z" level=info msg="Container 2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:59.589799 containerd[1552]: time="2026-01-23T00:58:59.589751996Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\"" Jan 23 00:58:59.592290 containerd[1552]: time="2026-01-23T00:58:59.592241743Z" level=info msg="StartContainer for \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\"" Jan 23 00:58:59.593065 containerd[1552]: time="2026-01-23T00:58:59.592980193Z" level=info msg="connecting to shim 2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab" address="unix:///run/containerd/s/897250c28be974f95b178d34af768aeaa529470dffc80be7428ca21c65e38db3" protocol=ttrpc version=3 Jan 23 00:58:59.719180 systemd[1]: Started cri-containerd-2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab.scope - libcontainer container 2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab. Jan 23 00:58:59.753810 systemd[1]: cri-containerd-2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab.scope: Deactivated successfully. Jan 23 00:58:59.755119 containerd[1552]: time="2026-01-23T00:58:59.755068730Z" level=info msg="received container exit event container_id:\"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\" id:\"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\" pid:3306 exited_at:{seconds:1769129939 nanos:754534359}" Jan 23 00:58:59.765615 containerd[1552]: time="2026-01-23T00:58:59.765575150Z" level=info msg="StartContainer for \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\" returns successfully" Jan 23 00:59:00.453587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab-rootfs.mount: Deactivated successfully. Jan 23 00:59:00.549522 kubelet[2724]: E0123 00:59:00.549486 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:00.557668 containerd[1552]: time="2026-01-23T00:59:00.557433014Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 00:59:00.582095 containerd[1552]: time="2026-01-23T00:59:00.580660753Z" level=info msg="Container 8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:00.585308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215282416.mount: Deactivated successfully. 
Jan 23 00:59:00.600413 containerd[1552]: time="2026-01-23T00:59:00.600371542Z" level=info msg="CreateContainer within sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\"" Jan 23 00:59:00.602202 containerd[1552]: time="2026-01-23T00:59:00.602167085Z" level=info msg="StartContainer for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\"" Jan 23 00:59:00.602978 containerd[1552]: time="2026-01-23T00:59:00.602953180Z" level=info msg="connecting to shim 8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7" address="unix:///run/containerd/s/897250c28be974f95b178d34af768aeaa529470dffc80be7428ca21c65e38db3" protocol=ttrpc version=3 Jan 23 00:59:00.652415 systemd[1]: Started cri-containerd-8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7.scope - libcontainer container 8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7. Jan 23 00:59:00.696128 containerd[1552]: time="2026-01-23T00:59:00.696062592Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:00.697281 containerd[1552]: time="2026-01-23T00:59:00.697238959Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 00:59:00.697920 containerd[1552]: time="2026-01-23T00:59:00.697896634Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:00.699797 containerd[1552]: time="2026-01-23T00:59:00.699762130Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.419519029s" Jan 23 00:59:00.699797 containerd[1552]: time="2026-01-23T00:59:00.699805600Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 00:59:00.703307 containerd[1552]: time="2026-01-23T00:59:00.703265936Z" level=info msg="CreateContainer within sandbox \"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 00:59:00.711401 containerd[1552]: time="2026-01-23T00:59:00.711314441Z" level=info msg="Container 63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:00.716416 containerd[1552]: time="2026-01-23T00:59:00.716253154Z" level=info msg="CreateContainer within sandbox \"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\"" Jan 23 00:59:00.717547 containerd[1552]: time="2026-01-23T00:59:00.717498332Z" 
level=info msg="StartContainer for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\"" Jan 23 00:59:00.721098 containerd[1552]: time="2026-01-23T00:59:00.721058774Z" level=info msg="connecting to shim 63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c" address="unix:///run/containerd/s/110dadd67b9be8b98d4b9daf60ad6b6e52a554aac67ed949009fa2494b54f42d" protocol=ttrpc version=3 Jan 23 00:59:00.751212 systemd[1]: Started cri-containerd-63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c.scope - libcontainer container 63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c. Jan 23 00:59:00.756609 containerd[1552]: time="2026-01-23T00:59:00.756566553Z" level=info msg="StartContainer for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" returns successfully" Jan 23 00:59:00.875459 containerd[1552]: time="2026-01-23T00:59:00.875376904Z" level=info msg="StartContainer for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" returns successfully" Jan 23 00:59:00.880478 kubelet[2724]: I0123 00:59:00.880421 2724 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 00:59:00.929376 systemd[1]: Created slice kubepods-burstable-pod2b9e63e9_869a_4ed6_baa2_a4eac3c153e6.slice - libcontainer container kubepods-burstable-pod2b9e63e9_869a_4ed6_baa2_a4eac3c153e6.slice. Jan 23 00:59:00.950686 systemd[1]: Created slice kubepods-burstable-pod9b991f58_293a_4b9c_87ec_f39538c5bfd5.slice - libcontainer container kubepods-burstable-pod9b991f58_293a_4b9c_87ec_f39538c5bfd5.slice. Jan 23 00:59:00.962860 kubelet[2724]: I0123 00:59:00.961795 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dw7d\" (UniqueName: \"kubernetes.io/projected/9b991f58-293a-4b9c-87ec-f39538c5bfd5-kube-api-access-2dw7d\") pod \"coredns-66bc5c9577-tzt58\" (UID: \"9b991f58-293a-4b9c-87ec-f39538c5bfd5\") " pod="kube-system/coredns-66bc5c9577-tzt58" Jan 23 00:59:00.963150 kubelet[2724]: I0123 00:59:00.963006 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b991f58-293a-4b9c-87ec-f39538c5bfd5-config-volume\") pod \"coredns-66bc5c9577-tzt58\" (UID: \"9b991f58-293a-4b9c-87ec-f39538c5bfd5\") " pod="kube-system/coredns-66bc5c9577-tzt58" Jan 23 00:59:00.963150 kubelet[2724]: I0123 00:59:00.963068 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dwbf\" (UniqueName: \"kubernetes.io/projected/2b9e63e9-869a-4ed6-baa2-a4eac3c153e6-kube-api-access-5dwbf\") pod \"coredns-66bc5c9577-s7494\" (UID: \"2b9e63e9-869a-4ed6-baa2-a4eac3c153e6\") " pod="kube-system/coredns-66bc5c9577-s7494" Jan 23 00:59:00.963150 kubelet[2724]: I0123 00:59:00.963086 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9e63e9-869a-4ed6-baa2-a4eac3c153e6-config-volume\") pod \"coredns-66bc5c9577-s7494\" (UID: \"2b9e63e9-869a-4ed6-baa2-a4eac3c153e6\") " pod="kube-system/coredns-66bc5c9577-s7494" Jan 23 00:59:01.242583 kubelet[2724]: E0123 00:59:01.242520 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:01.244093 containerd[1552]: time="2026-01-23T00:59:01.244056826Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s7494,Uid:2b9e63e9-869a-4ed6-baa2-a4eac3c153e6,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:01.258166 kubelet[2724]: E0123 00:59:01.258117 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:01.259959 containerd[1552]: time="2026-01-23T00:59:01.259919287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tzt58,Uid:9b991f58-293a-4b9c-87ec-f39538c5bfd5,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:01.561092 kubelet[2724]: E0123 00:59:01.560915 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:01.562425 kubelet[2724]: E0123 00:59:01.562393 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:01.727288 kubelet[2724]: I0123 00:59:01.727211 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xpm69" podStartSLOduration=5.13871354 podStartE2EDuration="15.727186426s" podCreationTimestamp="2026-01-23 00:58:46 +0000 UTC" firstStartedPulling="2026-01-23 00:58:46.690742888 +0000 UTC m=+6.378830423" lastFinishedPulling="2026-01-23 00:58:57.279215774 +0000 UTC m=+16.967303309" observedRunningTime="2026-01-23 00:59:01.655132503 +0000 UTC m=+21.343220038" watchObservedRunningTime="2026-01-23 00:59:01.727186426 +0000 UTC m=+21.415273961" Jan 23 00:59:02.565284 kubelet[2724]: E0123 00:59:02.565149 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:02.565284 kubelet[2724]: E0123 00:59:02.565149 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:03.566838 kubelet[2724]: E0123 00:59:03.566753 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:04.298033 systemd-networkd[1427]: cilium_host: Link UP Jan 23 00:59:04.298449 systemd-networkd[1427]: cilium_net: Link UP Jan 23 00:59:04.301536 systemd-networkd[1427]: cilium_net: Gained carrier Jan 23 00:59:04.301782 systemd-networkd[1427]: cilium_host: Gained carrier Jan 23 00:59:04.448482 systemd-networkd[1427]: cilium_vxlan: Link UP Jan 23 00:59:04.448654 systemd-networkd[1427]: cilium_vxlan: Gained carrier Jan 23 00:59:04.568699 kubelet[2724]: E0123 00:59:04.568478 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:04.654130 systemd-networkd[1427]: cilium_net: Gained IPv6LL Jan 23 00:59:04.693065 kernel: NET: Registered PF_ALG protocol family Jan 23 00:59:05.158569 systemd-networkd[1427]: cilium_host: Gained IPv6LL Jan 23 00:59:05.479978 systemd-networkd[1427]: lxc_health: Link UP Jan 23 00:59:05.489441 systemd-networkd[1427]: lxc_health: Gained carrier Jan 
23 00:59:05.823047 kernel: eth0: renamed from tmp4ef50 Jan 23 00:59:05.825046 kernel: eth0: renamed from tmp1af1d Jan 23 00:59:05.828867 systemd-networkd[1427]: lxcd1c0c3e0598c: Link UP Jan 23 00:59:05.829775 systemd-networkd[1427]: lxcbfca1899e6a4: Link UP Jan 23 00:59:05.833768 systemd-networkd[1427]: lxcd1c0c3e0598c: Gained carrier Jan 23 00:59:05.834569 systemd-networkd[1427]: lxcbfca1899e6a4: Gained carrier Jan 23 00:59:06.181200 systemd-networkd[1427]: cilium_vxlan: Gained IPv6LL Jan 23 00:59:06.595841 kubelet[2724]: E0123 00:59:06.595787 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:06.621854 kubelet[2724]: I0123 00:59:06.621731 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-xr59n" podStartSLOduration=6.794340862 podStartE2EDuration="20.62157181s" podCreationTimestamp="2026-01-23 00:58:46 +0000 UTC" firstStartedPulling="2026-01-23 00:58:46.873594496 +0000 UTC m=+6.561682031" lastFinishedPulling="2026-01-23 00:59:00.700825444 +0000 UTC m=+20.388912979" observedRunningTime="2026-01-23 00:59:01.730521898 +0000 UTC m=+21.418609433" watchObservedRunningTime="2026-01-23 00:59:06.62157181 +0000 UTC m=+26.309659355" Jan 23 00:59:06.949247 systemd-networkd[1427]: lxc_health: Gained IPv6LL Jan 23 00:59:07.525322 systemd-networkd[1427]: lxcbfca1899e6a4: Gained IPv6LL Jan 23 00:59:07.578296 kubelet[2724]: E0123 00:59:07.578244 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:07.653261 systemd-networkd[1427]: lxcd1c0c3e0598c: Gained IPv6LL Jan 23 00:59:09.532453 containerd[1552]: time="2026-01-23T00:59:09.532106431Z" level=info msg="connecting to shim 4ef50df5064cd26f008846ef5b5f0b9f4f3490dda6e0941140fbc90be63c98ff" address="unix:///run/containerd/s/5ea39742312858ea7c6e7390f0f99d9a711bcc97dd97aa8660e8a906c14ec5cd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:09.536463 containerd[1552]: time="2026-01-23T00:59:09.536419791Z" level=info msg="connecting to shim 1af1de44663159580d251f3db50a8671f03b0706e5592c9314307e9092661739" address="unix:///run/containerd/s/a29e802f966426f526d826f9532662a9641b52d19fb66fa9e8f81756e629b30e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:09.581490 systemd[1]: Started cri-containerd-4ef50df5064cd26f008846ef5b5f0b9f4f3490dda6e0941140fbc90be63c98ff.scope - libcontainer container 4ef50df5064cd26f008846ef5b5f0b9f4f3490dda6e0941140fbc90be63c98ff. Jan 23 00:59:09.595505 systemd[1]: Started cri-containerd-1af1de44663159580d251f3db50a8671f03b0706e5592c9314307e9092661739.scope - libcontainer container 1af1de44663159580d251f3db50a8671f03b0706e5592c9314307e9092661739. 
Jan 23 00:59:09.674429 containerd[1552]: time="2026-01-23T00:59:09.674385638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s7494,Uid:2b9e63e9-869a-4ed6-baa2-a4eac3c153e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef50df5064cd26f008846ef5b5f0b9f4f3490dda6e0941140fbc90be63c98ff\"" Jan 23 00:59:09.676258 kubelet[2724]: E0123 00:59:09.676232 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:09.683758 containerd[1552]: time="2026-01-23T00:59:09.683732558Z" level=info msg="CreateContainer within sandbox \"4ef50df5064cd26f008846ef5b5f0b9f4f3490dda6e0941140fbc90be63c98ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:59:09.700528 containerd[1552]: time="2026-01-23T00:59:09.699027015Z" level=info msg="Container 3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:09.711607 containerd[1552]: time="2026-01-23T00:59:09.711536856Z" level=info msg="CreateContainer within sandbox \"4ef50df5064cd26f008846ef5b5f0b9f4f3490dda6e0941140fbc90be63c98ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654\"" Jan 23 00:59:09.712710 containerd[1552]: time="2026-01-23T00:59:09.712657265Z" level=info msg="StartContainer for \"3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654\"" Jan 23 00:59:09.714463 containerd[1552]: time="2026-01-23T00:59:09.714443968Z" level=info msg="connecting to shim 3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654" address="unix:///run/containerd/s/5ea39742312858ea7c6e7390f0f99d9a711bcc97dd97aa8660e8a906c14ec5cd" protocol=ttrpc version=3 Jan 23 00:59:09.738232 systemd[1]: Started cri-containerd-3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654.scope - libcontainer container 3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654. 
Jan 23 00:59:09.743961 containerd[1552]: time="2026-01-23T00:59:09.743923385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tzt58,Uid:9b991f58-293a-4b9c-87ec-f39538c5bfd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1af1de44663159580d251f3db50a8671f03b0706e5592c9314307e9092661739\"" Jan 23 00:59:09.745849 kubelet[2724]: E0123 00:59:09.745158 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Jan 23 00:59:09.749355 containerd[1552]: time="2026-01-23T00:59:09.749336409Z" level=info msg="CreateContainer within sandbox \"1af1de44663159580d251f3db50a8671f03b0706e5592c9314307e9092661739\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:59:09.758303 containerd[1552]: time="2026-01-23T00:59:09.758281938Z" level=info msg="Container 7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:09.766083 containerd[1552]: time="2026-01-23T00:59:09.766059630Z" level=info msg="CreateContainer within sandbox \"1af1de44663159580d251f3db50a8671f03b0706e5592c9314307e9092661739\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013\"" Jan 23 00:59:09.766980 containerd[1552]: time="2026-01-23T00:59:09.766938788Z" level=info msg="StartContainer for \"7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013\"" Jan 23 00:59:09.770105 containerd[1552]: time="2026-01-23T00:59:09.770065947Z" level=info msg="connecting to shim 7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013" address="unix:///run/containerd/s/a29e802f966426f526d826f9532662a9641b52d19fb66fa9e8f81756e629b30e" protocol=ttrpc version=3 Jan 23 00:59:09.798041 containerd[1552]: time="2026-01-23T00:59:09.797283837Z" level=info msg="StartContainer for \"3a15f61b9797d96fa6a51afc51764e79cdc80dc6dd514e8fdb33fd90257ef654\" returns successfully" Jan 23 00:59:09.808152 systemd[1]: Started cri-containerd-7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013.scope - libcontainer container 7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013. Jan 23 00:59:09.859733 containerd[1552]: time="2026-01-23T00:59:09.859616726Z" level=info msg="StartContainer for \"7a4feb859ba85874237725e15353078b1589445e9b65c7ff0dddd72b697d2013\" returns successfully" Jan 23 00:59:10.520680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816751176.mount: Deactivated successfully. 
Jan 23 00:59:10.586268 kubelet[2724]: E0123 00:59:10.585474 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 00:59:10.589995 kubelet[2724]: E0123 00:59:10.589964 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 00:59:10.602187 kubelet[2724]: I0123 00:59:10.601434 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s7494" podStartSLOduration=24.601418673 podStartE2EDuration="24.601418673s" podCreationTimestamp="2026-01-23 00:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:59:10.600316011 +0000 UTC m=+30.288403546" watchObservedRunningTime="2026-01-23 00:59:10.601418673 +0000 UTC m=+30.289506208"
Jan 23 00:59:10.666893 kubelet[2724]: I0123 00:59:10.666825 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tzt58" podStartSLOduration=24.666810368 podStartE2EDuration="24.666810368s" podCreationTimestamp="2026-01-23 00:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:59:10.666729635 +0000 UTC m=+30.354817170" watchObservedRunningTime="2026-01-23 00:59:10.666810368 +0000 UTC m=+30.354897903"
Jan 23 00:59:11.592104 kubelet[2724]: E0123 00:59:11.592042 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 00:59:11.592824 kubelet[2724]: E0123 00:59:11.592202 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 00:59:12.594227 kubelet[2724]: E0123 00:59:12.594144 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 00:59:12.594227 kubelet[2724]: E0123 00:59:12.594171 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 00:59:48.439602 kubelet[2724]: E0123 00:59:48.439187 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:06.440852 kubelet[2724]: E0123 01:00:06.439531 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:10.440688 kubelet[2724]: E0123 01:00:10.440186 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:20.439949 kubelet[2724]: E0123 01:00:20.439374 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:22.439161 kubelet[2724]: E0123 01:00:22.439124 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:23.438781 kubelet[2724]: E0123 01:00:23.438746 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:32.439943 kubelet[2724]: E0123 01:00:32.439526 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:33.439694 kubelet[2724]: E0123 01:00:33.439656 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:36.000842 systemd[1]: Started sshd@7-172.236.108.127:22-68.220.241.50:46276.service - OpenSSH per-connection server daemon (68.220.241.50:46276).
Jan 23 01:00:36.173357 sshd[4056]: Accepted publickey for core from 68.220.241.50 port 46276 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:36.175375 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:36.182094 systemd-logind[1528]: New session 8 of user core.
Jan 23 01:00:36.187353 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 01:00:36.384797 sshd[4059]: Connection closed by 68.220.241.50 port 46276
Jan 23 01:00:36.386240 sshd-session[4056]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:36.391958 systemd[1]: sshd@7-172.236.108.127:22-68.220.241.50:46276.service: Deactivated successfully.
Jan 23 01:00:36.394540 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 01:00:36.395479 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit.
Jan 23 01:00:36.398316 systemd-logind[1528]: Removed session 8.
Jan 23 01:00:41.430924 systemd[1]: Started sshd@8-172.236.108.127:22-68.220.241.50:46286.service - OpenSSH per-connection server daemon (68.220.241.50:46286).
Jan 23 01:00:41.610828 sshd[4075]: Accepted publickey for core from 68.220.241.50 port 46286 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:41.613821 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:41.620200 systemd-logind[1528]: New session 9 of user core.
Jan 23 01:00:41.633238 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 01:00:41.819230 sshd[4078]: Connection closed by 68.220.241.50 port 46286
Jan 23 01:00:41.820479 sshd-session[4075]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:41.825782 systemd[1]: sshd@8-172.236.108.127:22-68.220.241.50:46286.service: Deactivated successfully.
Jan 23 01:00:41.827924 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 01:00:41.829774 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit.
Jan 23 01:00:41.831493 systemd-logind[1528]: Removed session 9.
Jan 23 01:00:46.848806 systemd[1]: Started sshd@9-172.236.108.127:22-68.220.241.50:40524.service - OpenSSH per-connection server daemon (68.220.241.50:40524).
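The dns.go:154 errors above repeat because the node's /etc/resolv.conf lists more nameservers than the glibc resolver honors (MAXNS is 3), so kubelet truncates the list when it generates pod DNS configuration and logs the three-server line it actually applied. A minimal Go sketch of that truncation, assuming the standard resolv.conf "nameserver" syntax; the function name and warning wording are illustrative, not kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the glibc resolver's MAXNS limit of 3, the
    // reason kubelet warns and truncates instead of passing every entry
    // through to the pod.
    const maxNameservers = 3

    func effectiveNameservers(path string) ([]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            return nil, err
        }
        if len(servers) > maxNameservers {
            // Same shape as the kubelet warning in this journal: report
            // the nameserver line that will actually be applied.
            fmt.Fprintf(os.Stderr, "nameserver limits exceeded, applying: %s\n",
                strings.Join(servers[:maxNameservers], " "))
            servers = servers[:maxNameservers]
        }
        return servers, nil
    }

    func main() {
        servers, err := effectiveNameservers("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(strings.Join(servers, " "))
    }

The check runs on every pod DNS sync, which is why the warning keeps firing; it stops only once the node's resolv.conf is trimmed to three entries or kubelet is pointed at a dedicated file via --resolv-conf.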
Jan 23 01:00:47.016173 sshd[4091]: Accepted publickey for core from 68.220.241.50 port 40524 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:47.017871 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:47.023083 systemd-logind[1528]: New session 10 of user core.
Jan 23 01:00:47.033173 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 01:00:47.199001 sshd[4094]: Connection closed by 68.220.241.50 port 40524
Jan 23 01:00:47.199252 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:47.203545 systemd[1]: sshd@9-172.236.108.127:22-68.220.241.50:40524.service: Deactivated successfully.
Jan 23 01:00:47.206249 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 01:00:47.207266 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit.
Jan 23 01:00:47.208960 systemd-logind[1528]: Removed session 10.
Jan 23 01:00:52.237236 systemd[1]: Started sshd@10-172.236.108.127:22-68.220.241.50:40526.service - OpenSSH per-connection server daemon (68.220.241.50:40526).
Jan 23 01:00:52.401351 sshd[4109]: Accepted publickey for core from 68.220.241.50 port 40526 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:52.403782 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:52.408365 systemd-logind[1528]: New session 11 of user core.
Jan 23 01:00:52.413174 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 01:00:52.587554 sshd[4112]: Connection closed by 68.220.241.50 port 40526
Jan 23 01:00:52.589229 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:52.594437 systemd[1]: sshd@10-172.236.108.127:22-68.220.241.50:40526.service: Deactivated successfully.
Jan 23 01:00:52.596752 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 01:00:52.597679 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit.
Jan 23 01:00:52.599595 systemd-logind[1528]: Removed session 11.
Jan 23 01:00:52.623884 systemd[1]: Started sshd@11-172.236.108.127:22-68.220.241.50:38046.service - OpenSSH per-connection server daemon (68.220.241.50:38046).
Jan 23 01:00:52.795665 sshd[4124]: Accepted publickey for core from 68.220.241.50 port 38046 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:52.797418 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:52.803367 systemd-logind[1528]: New session 12 of user core.
Jan 23 01:00:52.808166 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 01:00:53.030347 sshd[4127]: Connection closed by 68.220.241.50 port 38046
Jan 23 01:00:53.030685 sshd-session[4124]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:53.037272 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit.
Jan 23 01:00:53.039511 systemd[1]: sshd@11-172.236.108.127:22-68.220.241.50:38046.service: Deactivated successfully.
Jan 23 01:00:53.044606 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 01:00:53.047987 systemd-logind[1528]: Removed session 12.
Jan 23 01:00:53.060071 systemd[1]: Started sshd@12-172.236.108.127:22-68.220.241.50:38050.service - OpenSSH per-connection server daemon (68.220.241.50:38050).
Jan 23 01:00:53.230074 sshd[4137]: Accepted publickey for core from 68.220.241.50 port 38050 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:53.231568 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:53.237307 systemd-logind[1528]: New session 13 of user core.
Jan 23 01:00:53.241192 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 01:00:53.422886 sshd[4140]: Connection closed by 68.220.241.50 port 38050
Jan 23 01:00:53.424055 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:53.428367 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit.
Jan 23 01:00:53.429460 systemd[1]: sshd@12-172.236.108.127:22-68.220.241.50:38050.service: Deactivated successfully.
Jan 23 01:00:53.431514 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 01:00:53.433924 systemd-logind[1528]: Removed session 13.
Jan 23 01:00:57.439213 kubelet[2724]: E0123 01:00:57.439172 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:00:58.456368 systemd[1]: Started sshd@13-172.236.108.127:22-68.220.241.50:38058.service - OpenSSH per-connection server daemon (68.220.241.50:38058).
Jan 23 01:00:58.617064 sshd[4152]: Accepted publickey for core from 68.220.241.50 port 38058 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:00:58.618092 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:58.623046 systemd-logind[1528]: New session 14 of user core.
Jan 23 01:00:58.631186 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 01:00:58.802316 sshd[4155]: Connection closed by 68.220.241.50 port 38058
Jan 23 01:00:58.803309 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:58.809234 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit.
Jan 23 01:00:58.809566 systemd[1]: sshd@13-172.236.108.127:22-68.220.241.50:38058.service: Deactivated successfully.
Jan 23 01:00:58.812104 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 01:00:58.814946 systemd-logind[1528]: Removed session 14.
Jan 23 01:01:03.845068 systemd[1]: Started sshd@14-172.236.108.127:22-68.220.241.50:37662.service - OpenSSH per-connection server daemon (68.220.241.50:37662).
Jan 23 01:01:04.038857 sshd[4167]: Accepted publickey for core from 68.220.241.50 port 37662 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:04.039591 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:04.045934 systemd-logind[1528]: New session 15 of user core.
Jan 23 01:01:04.053176 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 01:01:04.239479 sshd[4170]: Connection closed by 68.220.241.50 port 37662
Jan 23 01:01:04.240002 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:04.245114 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit.
Jan 23 01:01:04.245344 systemd[1]: sshd@14-172.236.108.127:22-68.220.241.50:37662.service: Deactivated successfully.
Jan 23 01:01:04.248099 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 01:01:04.249677 systemd-logind[1528]: Removed session 15.
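Each sshd block above is one complete socket-activation round trip: systemd starts a per-connection sshd@N-LOCAL:22-PEER:PORT.service instance, pam_unix opens the session, logind registers session N backed by a session-N.scope, and on disconnect the service, the scope, and the logind session all wind down. A throwaway Go sketch that pairs the logind "New session"/"Removed session" lines from a journal dump on stdin and prints how long each session lived; the timestamp layout and regexes are assumptions matching this log's format:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // Journal lines look like:
    //   Jan 23 01:00:36.182094 systemd-logind[1528]: New session 8 of user core.
    //   Jan 23 01:00:36.398316 systemd-logind[1528]: Removed session 8.
    var (
        tsLayout = "Jan _2 15:04:05.000000"
        openRe   = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user`)
        closeRe  = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
    )

    func main() {
        opened := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if m := openRe.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(tsLayout, m[1]); err == nil {
                    opened[m[2]] = t
                }
            } else if m := closeRe.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(tsLayout, m[1]); err == nil {
                    if start, ok := opened[m[2]]; ok {
                        fmt.Printf("session %s lived %s\n", m[2], t.Sub(start))
                        delete(opened, m[2])
                    }
                }
            }
        }
    }

Fed the lines above, it would report lifetimes well under a second for sessions 8 through 15, consistent with a scripted probe reconnecting every five seconds or so.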
Jan 23 01:01:04.271239 systemd[1]: Started sshd@15-172.236.108.127:22-68.220.241.50:37674.service - OpenSSH per-connection server daemon (68.220.241.50:37674).
Jan 23 01:01:04.438852 sshd[4182]: Accepted publickey for core from 68.220.241.50 port 37674 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:04.440827 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:04.447249 systemd-logind[1528]: New session 16 of user core.
Jan 23 01:01:04.451205 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 01:01:04.829359 sshd[4185]: Connection closed by 68.220.241.50 port 37674
Jan 23 01:01:04.831184 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:04.835421 systemd[1]: sshd@15-172.236.108.127:22-68.220.241.50:37674.service: Deactivated successfully.
Jan 23 01:01:04.838661 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 01:01:04.839914 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit.
Jan 23 01:01:04.842735 systemd-logind[1528]: Removed session 16.
Jan 23 01:01:04.870626 systemd[1]: Started sshd@16-172.236.108.127:22-68.220.241.50:37682.service - OpenSSH per-connection server daemon (68.220.241.50:37682).
Jan 23 01:01:05.054302 sshd[4195]: Accepted publickey for core from 68.220.241.50 port 37682 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:05.055895 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:05.062074 systemd-logind[1528]: New session 17 of user core.
Jan 23 01:01:05.067215 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 01:01:05.691566 sshd[4198]: Connection closed by 68.220.241.50 port 37682
Jan 23 01:01:05.692210 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:05.696732 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit.
Jan 23 01:01:05.698305 systemd[1]: sshd@16-172.236.108.127:22-68.220.241.50:37682.service: Deactivated successfully.
Jan 23 01:01:05.701274 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 01:01:05.703900 systemd-logind[1528]: Removed session 17.
Jan 23 01:01:05.723328 systemd[1]: Started sshd@17-172.236.108.127:22-68.220.241.50:37694.service - OpenSSH per-connection server daemon (68.220.241.50:37694).
Jan 23 01:01:05.903747 sshd[4214]: Accepted publickey for core from 68.220.241.50 port 37694 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:05.905589 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:05.911883 systemd-logind[1528]: New session 18 of user core.
Jan 23 01:01:05.919158 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 01:01:06.209127 sshd[4217]: Connection closed by 68.220.241.50 port 37694
Jan 23 01:01:06.211106 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:06.215406 systemd[1]: sshd@17-172.236.108.127:22-68.220.241.50:37694.service: Deactivated successfully.
Jan 23 01:01:06.217367 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 01:01:06.218206 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit.
Jan 23 01:01:06.219863 systemd-logind[1528]: Removed session 18.
Jan 23 01:01:06.245276 systemd[1]: Started sshd@18-172.236.108.127:22-68.220.241.50:37700.service - OpenSSH per-connection server daemon (68.220.241.50:37700).
Jan 23 01:01:06.438060 sshd[4227]: Accepted publickey for core from 68.220.241.50 port 37700 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:06.440835 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:06.450425 systemd-logind[1528]: New session 19 of user core.
Jan 23 01:01:06.456212 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 01:01:06.641439 sshd[4230]: Connection closed by 68.220.241.50 port 37700
Jan 23 01:01:06.642256 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:06.647379 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit.
Jan 23 01:01:06.647642 systemd[1]: sshd@18-172.236.108.127:22-68.220.241.50:37700.service: Deactivated successfully.
Jan 23 01:01:06.649883 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 01:01:06.651969 systemd-logind[1528]: Removed session 19.
Jan 23 01:01:07.438874 kubelet[2724]: E0123 01:01:07.438633 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:11.680632 systemd[1]: Started sshd@19-172.236.108.127:22-68.220.241.50:37716.service - OpenSSH per-connection server daemon (68.220.241.50:37716).
Jan 23 01:01:11.881066 sshd[4247]: Accepted publickey for core from 68.220.241.50 port 37716 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:11.883197 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:11.888418 systemd-logind[1528]: New session 20 of user core.
Jan 23 01:01:11.896161 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 01:01:12.087509 sshd[4250]: Connection closed by 68.220.241.50 port 37716
Jan 23 01:01:12.088275 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:12.093855 systemd[1]: sshd@19-172.236.108.127:22-68.220.241.50:37716.service: Deactivated successfully.
Jan 23 01:01:12.093876 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit.
Jan 23 01:01:12.096805 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 01:01:12.098587 systemd-logind[1528]: Removed session 20.
Jan 23 01:01:14.440041 kubelet[2724]: E0123 01:01:14.439343 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:17.121415 systemd[1]: Started sshd@20-172.236.108.127:22-68.220.241.50:34716.service - OpenSSH per-connection server daemon (68.220.241.50:34716).
Jan 23 01:01:17.298819 sshd[4264]: Accepted publickey for core from 68.220.241.50 port 34716 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:17.300977 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:17.307838 systemd-logind[1528]: New session 21 of user core.
Jan 23 01:01:17.316155 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 01:01:17.493211 sshd[4267]: Connection closed by 68.220.241.50 port 34716
Jan 23 01:01:17.493807 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:17.501810 systemd[1]: sshd@20-172.236.108.127:22-68.220.241.50:34716.service: Deactivated successfully.
Jan 23 01:01:17.504748 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 01:01:17.507350 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit.
Jan 23 01:01:17.509072 systemd-logind[1528]: Removed session 21.
Jan 23 01:01:17.526478 systemd[1]: Started sshd@21-172.236.108.127:22-68.220.241.50:34728.service - OpenSSH per-connection server daemon (68.220.241.50:34728).
Jan 23 01:01:17.689071 sshd[4279]: Accepted publickey for core from 68.220.241.50 port 34728 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:17.690709 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:17.697638 systemd-logind[1528]: New session 22 of user core.
Jan 23 01:01:17.701144 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 01:01:19.081713 containerd[1552]: time="2026-01-23T01:01:19.081669407Z" level=info msg="StopContainer for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" with timeout 30 (s)"
Jan 23 01:01:19.083272 containerd[1552]: time="2026-01-23T01:01:19.083182310Z" level=info msg="Stop container \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" with signal terminated"
Jan 23 01:01:19.120294 containerd[1552]: time="2026-01-23T01:01:19.120232361Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 01:01:19.120821 systemd[1]: cri-containerd-63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c.scope: Deactivated successfully.
Jan 23 01:01:19.124854 containerd[1552]: time="2026-01-23T01:01:19.124799122Z" level=info msg="received container exit event container_id:\"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" id:\"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" pid:3382 exited_at:{seconds:1769130079 nanos:122921956}"
Jan 23 01:01:19.138580 containerd[1552]: time="2026-01-23T01:01:19.138538457Z" level=info msg="StopContainer for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" with timeout 2 (s)"
Jan 23 01:01:19.139380 containerd[1552]: time="2026-01-23T01:01:19.139288083Z" level=info msg="Stop container \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" with signal terminated"
Jan 23 01:01:19.151143 systemd-networkd[1427]: lxc_health: Link DOWN
Jan 23 01:01:19.151153 systemd-networkd[1427]: lxc_health: Lost carrier
Jan 23 01:01:19.162519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c-rootfs.mount: Deactivated successfully.
Jan 23 01:01:19.178363 containerd[1552]: time="2026-01-23T01:01:19.178323316Z" level=info msg="StopContainer for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" returns successfully"
Jan 23 01:01:19.179624 systemd[1]: cri-containerd-8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7.scope: Deactivated successfully.
Jan 23 01:01:19.179955 systemd[1]: cri-containerd-8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7.scope: Consumed 7.150s CPU time, 125.4M memory peak, 116K read from disk, 13.3M written to disk.
Jan 23 01:01:19.180874 containerd[1552]: time="2026-01-23T01:01:19.180822980Z" level=info msg="StopPodSandbox for \"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\""
Jan 23 01:01:19.180923 containerd[1552]: time="2026-01-23T01:01:19.180877003Z" level=info msg="Container to stop \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:01:19.183046 containerd[1552]: time="2026-01-23T01:01:19.182998354Z" level=info msg="received container exit event container_id:\"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" id:\"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" pid:3350 exited_at:{seconds:1769130079 nanos:182348214}"
Jan 23 01:01:19.204378 containerd[1552]: time="2026-01-23T01:01:19.203635184Z" level=info msg="received sandbox exit event container_id:\"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" id:\"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" exit_status:137 exited_at:{seconds:1769130079 nanos:203500926}" monitor_name=podsandbox
Jan 23 01:01:19.203976 systemd[1]: cri-containerd-5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea.scope: Deactivated successfully.
Jan 23 01:01:19.221463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7-rootfs.mount: Deactivated successfully.
Jan 23 01:01:19.231609 containerd[1552]: time="2026-01-23T01:01:19.231538082Z" level=info msg="StopContainer for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" returns successfully"
Jan 23 01:01:19.233952 containerd[1552]: time="2026-01-23T01:01:19.233885766Z" level=info msg="StopPodSandbox for \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\""
Jan 23 01:01:19.233952 containerd[1552]: time="2026-01-23T01:01:19.233945729Z" level=info msg="Container to stop \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:01:19.234606 containerd[1552]: time="2026-01-23T01:01:19.234525485Z" level=info msg="Container to stop \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:01:19.234606 containerd[1552]: time="2026-01-23T01:01:19.234552067Z" level=info msg="Container to stop \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:01:19.234606 containerd[1552]: time="2026-01-23T01:01:19.234565577Z" level=info msg="Container to stop \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:01:19.234606 containerd[1552]: time="2026-01-23T01:01:19.234574128Z" level=info msg="Container to stop \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:01:19.242860 systemd[1]: cri-containerd-a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a.scope: Deactivated successfully.
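The StopContainer sequence running through these entries is containerd's ordinary graceful shutdown: SIGTERM to the task ("with signal terminated"), a grace period ("with timeout 30 (s)"), then the cri-containerd-<id>.scope deactivates once the process exits and the pod sandbox is torn down. Roughly the same flow can be driven by hand with the containerd Go client; this is a sketch assuming a CRI-managed container in the k8s.io namespace, not a transcription of the CRI plugin's own code:

    package main

    import (
        "context"
        "fmt"
        "os"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func check(err error) {
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

    func main() {
        // Same containerd instance the kubelet talks to on this node.
        client, err := containerd.New("/run/containerd/containerd.sock")
        check(err)
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        container, err := client.LoadContainer(ctx, os.Args[1])
        check(err)
        task, err := container.Task(ctx, nil)
        check(err)
        exitCh, err := task.Wait(ctx)
        check(err)

        // SIGTERM first, as in "Stop container ... with signal terminated".
        check(task.Kill(ctx, syscall.SIGTERM))

        select {
        case status := <-exitCh:
            fmt.Println("exited with code", status.ExitCode())
        case <-time.After(30 * time.Second):
            // Grace period elapsed, as with "timeout 30 (s)": escalate.
            _ = task.Kill(ctx, syscall.SIGKILL)
        }
    }

The exit_status:137 on the sandbox events above is the other path: 137 = 128 + 9, a process that ended on SIGKILL rather than exiting within its grace period.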
Jan 23 01:01:19.249188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea-rootfs.mount: Deactivated successfully.
Jan 23 01:01:19.250985 containerd[1552]: time="2026-01-23T01:01:19.250913544Z" level=info msg="shim disconnected" id=5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea namespace=k8s.io
Jan 23 01:01:19.250985 containerd[1552]: time="2026-01-23T01:01:19.250938406Z" level=warning msg="cleaning up after shim disconnected" id=5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea namespace=k8s.io
Jan 23 01:01:19.250985 containerd[1552]: time="2026-01-23T01:01:19.250946146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 01:01:19.254332 containerd[1552]: time="2026-01-23T01:01:19.254306513Z" level=info msg="received sandbox exit event container_id:\"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" id:\"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" exit_status:137 exited_at:{seconds:1769130079 nanos:253987154}" monitor_name=podsandbox
Jan 23 01:01:19.271797 containerd[1552]: time="2026-01-23T01:01:19.271764217Z" level=info msg="TearDown network for sandbox \"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" successfully"
Jan 23 01:01:19.271797 containerd[1552]: time="2026-01-23T01:01:19.271789549Z" level=info msg="StopPodSandbox for \"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" returns successfully"
Jan 23 01:01:19.273179 containerd[1552]: time="2026-01-23T01:01:19.273148572Z" level=info msg="received sandbox container exit event sandbox_id:\"5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea\" exit_status:137 exited_at:{seconds:1769130079 nanos:203500926}" monitor_name=criService
Jan 23 01:01:19.273430 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e67273cb4d67653e16e873c89c1355c78f2e23d1407173648ed21147b26fbea-shm.mount: Deactivated successfully.
Jan 23 01:01:19.293192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a-rootfs.mount: Deactivated successfully.
Jan 23 01:01:19.295514 containerd[1552]: time="2026-01-23T01:01:19.295476207Z" level=info msg="shim disconnected" id=a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a namespace=k8s.io
Jan 23 01:01:19.295649 containerd[1552]: time="2026-01-23T01:01:19.295632987Z" level=warning msg="cleaning up after shim disconnected" id=a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a namespace=k8s.io
Jan 23 01:01:19.295759 containerd[1552]: time="2026-01-23T01:01:19.295723792Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 01:01:19.317184 containerd[1552]: time="2026-01-23T01:01:19.317083997Z" level=info msg="received sandbox container exit event sandbox_id:\"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" exit_status:137 exited_at:{seconds:1769130079 nanos:253987154}" monitor_name=criService
Jan 23 01:01:19.317512 containerd[1552]: time="2026-01-23T01:01:19.317487412Z" level=info msg="TearDown network for sandbox \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" successfully"
Jan 23 01:01:19.317584 containerd[1552]: time="2026-01-23T01:01:19.317571317Z" level=info msg="StopPodSandbox for \"a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a\" returns successfully"
Jan 23 01:01:19.430714 kubelet[2724]: I0123 01:01:19.429866 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-bpf-maps\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.430714 kubelet[2724]: I0123 01:01:19.429901 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-cgroup\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.430714 kubelet[2724]: I0123 01:01:19.429922 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-etc-cni-netd\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.430714 kubelet[2724]: I0123 01:01:19.429945 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nckkl\" (UniqueName: \"kubernetes.io/projected/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-kube-api-access-nckkl\") pod \"fb8ce8a8-7bc0-4639-8381-713ffe588ba9\" (UID: \"fb8ce8a8-7bc0-4639-8381-713ffe588ba9\") "
Jan 23 01:01:19.430714 kubelet[2724]: I0123 01:01:19.429962 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cni-path\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.430714 kubelet[2724]: I0123 01:01:19.429980 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acf5fbbc-7e70-4688-8d5b-65681258ce02-clustermesh-secrets\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.432471 kubelet[2724]: I0123 01:01:19.429995 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-run\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.432471 kubelet[2724]: I0123 01:01:19.430010 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-cilium-config-path\") pod \"fb8ce8a8-7bc0-4639-8381-713ffe588ba9\" (UID: \"fb8ce8a8-7bc0-4639-8381-713ffe588ba9\") "
Jan 23 01:01:19.432471 kubelet[2724]: I0123 01:01:19.430055 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-net\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.432471 kubelet[2724]: I0123 01:01:19.430071 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-hostproc\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.432471 kubelet[2724]: I0123 01:01:19.430085 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-xtables-lock\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.432471 kubelet[2724]: I0123 01:01:19.430143 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.432702 kubelet[2724]: I0123 01:01:19.430177 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cni-path" (OuterVolumeSpecName: "cni-path") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434049 kubelet[2724]: I0123 01:01:19.433281 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434233 kubelet[2724]: I0123 01:01:19.434185 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5fbbc-7e70-4688-8d5b-65681258ce02-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 01:01:19.434297 kubelet[2724]: I0123 01:01:19.434266 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434332 kubelet[2724]: I0123 01:01:19.434303 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-hostproc" (OuterVolumeSpecName: "hostproc") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434332 kubelet[2724]: I0123 01:01:19.434320 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434402 kubelet[2724]: I0123 01:01:19.434333 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434402 kubelet[2724]: I0123 01:01:19.434347 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.434402 kubelet[2724]: I0123 01:01:19.434368 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88tsg\" (UniqueName: \"kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-kube-api-access-88tsg\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.434402 kubelet[2724]: I0123 01:01:19.434389 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-config-path\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.434402 kubelet[2724]: I0123 01:01:19.434403 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-lib-modules\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434418 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-kernel\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434432 2724 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-hubble-tls\") pod \"acf5fbbc-7e70-4688-8d5b-65681258ce02\" (UID: \"acf5fbbc-7e70-4688-8d5b-65681258ce02\") "
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434464 2724 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-bpf-maps\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434472 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-cgroup\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434499 2724 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-etc-cni-netd\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434506 2724 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cni-path\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434556 kubelet[2724]: I0123 01:01:19.434515 2724 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acf5fbbc-7e70-4688-8d5b-65681258ce02-clustermesh-secrets\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434712 kubelet[2724]: I0123 01:01:19.434523 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-run\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434712 kubelet[2724]: I0123 01:01:19.434530 2724 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-net\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434712 kubelet[2724]: I0123 01:01:19.434537 2724 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-hostproc\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.434712 kubelet[2724]: I0123 01:01:19.434545 2724 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-xtables-lock\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.437735 kubelet[2724]: I0123 01:01:19.437677 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.437735 kubelet[2724]: I0123 01:01:19.437709 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:01:19.439697 kubelet[2724]: I0123 01:01:19.439656 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fb8ce8a8-7bc0-4639-8381-713ffe588ba9" (UID: "fb8ce8a8-7bc0-4639-8381-713ffe588ba9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:01:19.440151 kubelet[2724]: I0123 01:01:19.440122 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:01:19.442385 kubelet[2724]: I0123 01:01:19.442146 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-kube-api-access-nckkl" (OuterVolumeSpecName: "kube-api-access-nckkl") pod "fb8ce8a8-7bc0-4639-8381-713ffe588ba9" (UID: "fb8ce8a8-7bc0-4639-8381-713ffe588ba9"). InnerVolumeSpecName "kube-api-access-nckkl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:01:19.442815 kubelet[2724]: I0123 01:01:19.442780 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-kube-api-access-88tsg" (OuterVolumeSpecName: "kube-api-access-88tsg") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "kube-api-access-88tsg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:01:19.443802 kubelet[2724]: I0123 01:01:19.443782 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acf5fbbc-7e70-4688-8d5b-65681258ce02" (UID: "acf5fbbc-7e70-4688-8d5b-65681258ce02"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:01:19.535709 kubelet[2724]: I0123 01:01:19.535676 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-cilium-config-path\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.535841 kubelet[2724]: I0123 01:01:19.535720 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-88tsg\" (UniqueName: \"kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-kube-api-access-88tsg\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.535841 kubelet[2724]: I0123 01:01:19.535732 2724 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acf5fbbc-7e70-4688-8d5b-65681258ce02-cilium-config-path\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.535841 kubelet[2724]: I0123 01:01:19.535741 2724 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-lib-modules\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.535841 kubelet[2724]: I0123 01:01:19.535750 2724 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acf5fbbc-7e70-4688-8d5b-65681258ce02-host-proc-sys-kernel\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.535841 kubelet[2724]: I0123 01:01:19.535758 2724 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acf5fbbc-7e70-4688-8d5b-65681258ce02-hubble-tls\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.535841 kubelet[2724]: I0123 01:01:19.535766 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nckkl\" (UniqueName: \"kubernetes.io/projected/fb8ce8a8-7bc0-4639-8381-713ffe588ba9-kube-api-access-nckkl\") on node \"172-236-108-127\" DevicePath \"\""
Jan 23 01:01:19.860209 kubelet[2724]: I0123 01:01:19.859859 2724 scope.go:117] "RemoveContainer" containerID="63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c"
Jan 23 01:01:19.864634 containerd[1552]: time="2026-01-23T01:01:19.864602957Z" level=info msg="RemoveContainer for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\""
Jan 23 01:01:19.869910 containerd[1552]: time="2026-01-23T01:01:19.869875531Z" level=info msg="RemoveContainer for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" returns successfully"
Jan 23 01:01:19.870274 kubelet[2724]: I0123 01:01:19.870229 2724 scope.go:117] "RemoveContainer" containerID="63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c"
Jan 23 01:01:19.870881 containerd[1552]: time="2026-01-23T01:01:19.870419695Z" level=error msg="ContainerStatus for \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\": not found"
Jan 23 01:01:19.871207 kubelet[2724]: E0123 01:01:19.871138 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\": not found" containerID="63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c"
Jan 23 01:01:19.871207 kubelet[2724]: I0123 01:01:19.871167 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c"} err="failed to get container status \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"63e82f617d06d1f1d77cc5a87fef15ee24cb7315b705de425b01fdd8635ebb1c\": not found"
Jan 23 01:01:19.871207 kubelet[2724]: I0123 01:01:19.871194 2724 scope.go:117] "RemoveContainer" containerID="8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7"
Jan 23 01:01:19.872236 systemd[1]: Removed slice kubepods-besteffort-podfb8ce8a8_7bc0_4639_8381_713ffe588ba9.slice - libcontainer container kubepods-besteffort-podfb8ce8a8_7bc0_4639_8381_713ffe588ba9.slice.
Jan 23 01:01:19.875182 containerd[1552]: time="2026-01-23T01:01:19.875099453Z" level=info msg="RemoveContainer for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\""
Jan 23 01:01:19.879947 containerd[1552]: time="2026-01-23T01:01:19.879921419Z" level=info msg="RemoveContainer for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" returns successfully"
Jan 23 01:01:19.880456 kubelet[2724]: I0123 01:01:19.880418 2724 scope.go:117] "RemoveContainer" containerID="2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab"
Jan 23 01:01:19.880974 systemd[1]: Removed slice kubepods-burstable-podacf5fbbc_7e70_4688_8d5b_65681258ce02.slice - libcontainer container kubepods-burstable-podacf5fbbc_7e70_4688_8d5b_65681258ce02.slice.
Jan 23 01:01:19.881085 systemd[1]: kubepods-burstable-podacf5fbbc_7e70_4688_8d5b_65681258ce02.slice: Consumed 7.265s CPU time, 125.9M memory peak, 116K read from disk, 13.3M written to disk.
Jan 23 01:01:19.883850 containerd[1552]: time="2026-01-23T01:01:19.883677911Z" level=info msg="RemoveContainer for \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\""
Jan 23 01:01:19.888910 containerd[1552]: time="2026-01-23T01:01:19.888836159Z" level=info msg="RemoveContainer for \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\" returns successfully"
Jan 23 01:01:19.889161 kubelet[2724]: I0123 01:01:19.889106 2724 scope.go:117] "RemoveContainer" containerID="7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e"
Jan 23 01:01:19.891967 containerd[1552]: time="2026-01-23T01:01:19.891267958Z" level=info msg="RemoveContainer for \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\""
Jan 23 01:01:19.894524 containerd[1552]: time="2026-01-23T01:01:19.894492996Z" level=info msg="RemoveContainer for \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\" returns successfully"
Jan 23 01:01:19.894707 kubelet[2724]: I0123 01:01:19.894640 2724 scope.go:117] "RemoveContainer" containerID="fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800"
Jan 23 01:01:19.898934 containerd[1552]: time="2026-01-23T01:01:19.898273479Z" level=info msg="RemoveContainer for \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\""
Jan 23 01:01:19.902373 containerd[1552]: time="2026-01-23T01:01:19.902353810Z" level=info msg="RemoveContainer for \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\" returns successfully"
Jan 23 01:01:19.902537 kubelet[2724]: I0123 01:01:19.902522 2724 scope.go:117] "RemoveContainer" containerID="202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c"
Jan 23 01:01:19.903865 containerd[1552]: time="2026-01-23T01:01:19.903846752Z" level=info msg="RemoveContainer for \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\""
Jan 23 01:01:19.906053 containerd[1552]: time="2026-01-23T01:01:19.906009895Z" level=info msg="RemoveContainer for \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\" returns successfully"
Jan 23 01:01:19.906190 kubelet[2724]: I0123 01:01:19.906143 2724 scope.go:117] "RemoveContainer" containerID="8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7"
Jan 23 01:01:19.906371 containerd[1552]: time="2026-01-23T01:01:19.906303484Z" level=error msg="ContainerStatus for \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\": not found"
Jan 23 01:01:19.906503 kubelet[2724]: E0123 01:01:19.906469 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\": not found" containerID="8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7"
Jan 23 01:01:19.906546 kubelet[2724]: I0123 01:01:19.906498 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7"} err="failed to get container status \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c9cfe0c75c01e505aa68d7960d4236606ec9f2e44ac1e06e88f272dfdf0d4d7\": not found"
Jan 23 01:01:19.906546 kubelet[2724]: I0123 01:01:19.906516 2724 scope.go:117] "RemoveContainer" containerID="2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab"
Jan 23 01:01:19.906738 containerd[1552]: time="2026-01-23T01:01:19.906684076Z" level=error msg="ContainerStatus for \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\": not found"
Jan 23 01:01:19.906871 kubelet[2724]: E0123 01:01:19.906838 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\": not found" containerID="2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab"
Jan 23 01:01:19.906871 kubelet[2724]: I0123 01:01:19.906859 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab"} err="failed to get container status \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d92892f8435913fed4078fee0961490c8d18a329f24f9b5e4c1fadeee1365ab\": not found"
Jan 23 01:01:19.906969 kubelet[2724]: I0123 01:01:19.906878 2724 scope.go:117] "RemoveContainer" containerID="7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e"
Jan 23 01:01:19.907080 containerd[1552]: time="2026-01-23T01:01:19.907010737Z" level=error msg="ContainerStatus for \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\": not found"
Jan 23 01:01:19.907149 kubelet[2724]: E0123 01:01:19.907130 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\": not found" containerID="7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e"
Jan 23 01:01:19.907182 kubelet[2724]: I0123 01:01:19.907150 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e"} err="failed to get container status \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a59e529da3cf5ff52a21b715d75f941bb96fc9a5ca1ff8c8a252a50abac763e\": not found"
Jan 23 01:01:19.907182 kubelet[2724]: I0123 01:01:19.907162 2724 scope.go:117] "RemoveContainer" containerID="fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800"
Jan 23 01:01:19.907356 containerd[1552]: time="2026-01-23T01:01:19.907282453Z" level=error msg="ContainerStatus for \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\": not found"
Jan 23 01:01:19.907542 kubelet[2724]: E0123 01:01:19.907522 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\": not found" containerID="fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800"
Jan 23 01:01:19.907754 kubelet[2724]: I0123 01:01:19.907541 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800"} err="failed to get container status \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd92a06cf8a59b741a410dcd387c460fc5d5b09cfb8385231a6f406c82ef9800\": not found"
Jan 23 01:01:19.907754 kubelet[2724]: I0123 01:01:19.907554 2724 scope.go:117] "RemoveContainer" containerID="202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c"
Jan 23 01:01:19.907834 containerd[1552]: time="2026-01-23T01:01:19.907680178Z" level=error msg="ContainerStatus for \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\": not found"
Jan 23 01:01:19.907948 kubelet[2724]: E0123 01:01:19.907927 2724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\": not found" containerID="202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c"
Jan 23 01:01:19.908004 kubelet[2724]: I0123 01:01:19.907982 2724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c"} err="failed to get container status \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"202164513cb7556657021d231edcad6bb359d851e4184fb423d98c2ee44a8b8c\": not found"
Jan 23 01:01:20.173880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a567e6ee91c3b0b922f2cc8ae85d20e1ffb204fbc35c227ce0b14679b7217b2a-shm.mount: Deactivated successfully.
Jan 23 01:01:20.174008 systemd[1]: var-lib-kubelet-pods-fb8ce8a8\x2d7bc0\x2d4639\x2d8381\x2d713ffe588ba9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnckkl.mount: Deactivated successfully.
Jan 23 01:01:20.174110 systemd[1]: var-lib-kubelet-pods-acf5fbbc\x2d7e70\x2d4688\x2d8d5b\x2d65681258ce02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d88tsg.mount: Deactivated successfully.
Jan 23 01:01:20.174183 systemd[1]: var-lib-kubelet-pods-acf5fbbc\x2d7e70\x2d4688\x2d8d5b\x2d65681258ce02-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 23 01:01:20.174257 systemd[1]: var-lib-kubelet-pods-acf5fbbc\x2d7e70\x2d4688\x2d8d5b\x2d65681258ce02-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 23 01:01:20.441792 kubelet[2724]: I0123 01:01:20.441691 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf5fbbc-7e70-4688-8d5b-65681258ce02" path="/var/lib/kubelet/pods/acf5fbbc-7e70-4688-8d5b-65681258ce02/volumes"
Jan 23 01:01:20.442644 kubelet[2724]: I0123 01:01:20.442617 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8ce8a8-7bc0-4639-8381-713ffe588ba9" path="/var/lib/kubelet/pods/fb8ce8a8-7bc0-4639-8381-713ffe588ba9/volumes"
Jan 23 01:01:20.539437 kubelet[2724]: E0123 01:01:20.539367 2724 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 01:01:21.052627 sshd[4282]: Connection closed by 68.220.241.50 port 34728
Jan 23 01:01:21.054223 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:21.059783 systemd[1]: sshd@21-172.236.108.127:22-68.220.241.50:34728.service: Deactivated successfully.
Jan 23 01:01:21.062444 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 01:01:21.064083 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit.
Jan 23 01:01:21.066337 systemd-logind[1528]: Removed session 22.
Jan 23 01:01:21.083646 systemd[1]: Started sshd@22-172.236.108.127:22-68.220.241.50:34732.service - OpenSSH per-connection server daemon (68.220.241.50:34732).
Jan 23 01:01:21.252278 sshd[4423]: Accepted publickey for core from 68.220.241.50 port 34732 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:21.254203 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:21.259402 systemd-logind[1528]: New session 23 of user core.
Jan 23 01:01:21.270170 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 01:01:21.994835 sshd[4426]: Connection closed by 68.220.241.50 port 34732
Jan 23 01:01:21.996824 sshd-session[4423]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:22.003253 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit.
Jan 23 01:01:22.006945 systemd[1]: sshd@22-172.236.108.127:22-68.220.241.50:34732.service: Deactivated successfully.
Jan 23 01:01:22.011636 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 01:01:22.016625 systemd[1]: Created slice kubepods-burstable-pod87b32976_f174_4792_87c4_de67beca88fe.slice - libcontainer container kubepods-burstable-pod87b32976_f174_4792_87c4_de67beca88fe.slice.
Jan 23 01:01:22.032210 systemd[1]: Started sshd@23-172.236.108.127:22-68.220.241.50:34744.service - OpenSSH per-connection server daemon (68.220.241.50:34744).
Jan 23 01:01:22.034398 systemd-logind[1528]: Removed session 23.
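Note on the "Accepted publickey" lines above: sshd identifies each accepted key only by its SHA256 fingerprint (the SHA256:fbFq... value). For reference, the same fingerprint format can be computed with golang.org/x/crypto/ssh; the logged key is RSA, but the sketch below generates a throwaway ed25519 key purely for illustration, since the fingerprint scheme is identical across key types:

    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"fmt"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate a throwaway key; sshd logs the SHA256 fingerprint of
    	// whichever authorized key the client actually presented.
    	pub, _, err := ed25519.GenerateKey(rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sshPub, err := ssh.NewPublicKey(pub)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints a value in the same "SHA256:..." form as the log above.
    	fmt.Println(ssh.FingerprintSHA256(sshPub))
    }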
Jan 23 01:01:22.149573 kubelet[2724]: I0123 01:01:22.149201 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4st6\" (UniqueName: \"kubernetes.io/projected/87b32976-f174-4792-87c4-de67beca88fe-kube-api-access-m4st6\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.149573 kubelet[2724]: I0123 01:01:22.149239 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-host-proc-sys-kernel\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.149573 kubelet[2724]: I0123 01:01:22.149258 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-cilium-cgroup\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.149573 kubelet[2724]: I0123 01:01:22.149276 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-xtables-lock\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.149573 kubelet[2724]: I0123 01:01:22.149298 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87b32976-f174-4792-87c4-de67beca88fe-clustermesh-secrets\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150181 kubelet[2724]: I0123 01:01:22.149315 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87b32976-f174-4792-87c4-de67beca88fe-cilium-config-path\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150181 kubelet[2724]: I0123 01:01:22.149331 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-bpf-maps\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150181 kubelet[2724]: I0123 01:01:22.149347 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-hostproc\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150181 kubelet[2724]: I0123 01:01:22.149362 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-lib-modules\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150181 kubelet[2724]: I0123 01:01:22.149403 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/87b32976-f174-4792-87c4-de67beca88fe-cilium-ipsec-secrets\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150181 kubelet[2724]: I0123 01:01:22.149420 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87b32976-f174-4792-87c4-de67beca88fe-hubble-tls\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150341 kubelet[2724]: I0123 01:01:22.149437 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-etc-cni-netd\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150341 kubelet[2724]: I0123 01:01:22.149453 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-cilium-run\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150341 kubelet[2724]: I0123 01:01:22.149468 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-cni-path\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.150341 kubelet[2724]: I0123 01:01:22.149485 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87b32976-f174-4792-87c4-de67beca88fe-host-proc-sys-net\") pod \"cilium-p9hc5\" (UID: \"87b32976-f174-4792-87c4-de67beca88fe\") " pod="kube-system/cilium-p9hc5"
Jan 23 01:01:22.201769 sshd[4436]: Accepted publickey for core from 68.220.241.50 port 34744 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:22.203711 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:22.210340 systemd-logind[1528]: New session 24 of user core.
Jan 23 01:01:22.217143 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 01:01:22.327202 sshd[4439]: Connection closed by 68.220.241.50 port 34744
Jan 23 01:01:22.331718 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:22.334341 kubelet[2724]: E0123 01:01:22.334307 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:22.336107 containerd[1552]: time="2026-01-23T01:01:22.335158530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9hc5,Uid:87b32976-f174-4792-87c4-de67beca88fe,Namespace:kube-system,Attempt:0,}"
Jan 23 01:01:22.340927 systemd[1]: sshd@23-172.236.108.127:22-68.220.241.50:34744.service: Deactivated successfully.
Jan 23 01:01:22.350371 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 01:01:22.353274 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit.
Jan 23 01:01:22.356789 systemd-logind[1528]: Removed session 24.
Jan 23 01:01:22.359863 containerd[1552]: time="2026-01-23T01:01:22.359817166Z" level=info msg="connecting to shim 0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2" address="unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:01:22.367044 systemd[1]: Started sshd@24-172.236.108.127:22-68.220.241.50:34756.service - OpenSSH per-connection server daemon (68.220.241.50:34756).
Jan 23 01:01:22.386141 systemd[1]: Started cri-containerd-0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2.scope - libcontainer container 0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2.
Jan 23 01:01:22.421138 containerd[1552]: time="2026-01-23T01:01:22.421078060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9hc5,Uid:87b32976-f174-4792-87c4-de67beca88fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\""
Jan 23 01:01:22.423189 kubelet[2724]: E0123 01:01:22.423164 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:22.428750 containerd[1552]: time="2026-01-23T01:01:22.428721268Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 01:01:22.444060 containerd[1552]: time="2026-01-23T01:01:22.442334602Z" level=info msg="Container cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:01:22.448255 containerd[1552]: time="2026-01-23T01:01:22.448222022Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012\""
Jan 23 01:01:22.448828 containerd[1552]: time="2026-01-23T01:01:22.448794598Z" level=info msg="StartContainer for \"cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012\""
Jan 23 01:01:22.450288 containerd[1552]: time="2026-01-23T01:01:22.450217417Z" level=info msg="connecting to shim cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012" address="unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d" protocol=ttrpc version=3
Jan 23 01:01:22.471163 systemd[1]: Started cri-containerd-cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012.scope - libcontainer container cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012.
Jan 23 01:01:22.510610 containerd[1552]: time="2026-01-23T01:01:22.510529301Z" level=info msg="StartContainer for \"cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012\" returns successfully"
Jan 23 01:01:22.521472 systemd[1]: cri-containerd-cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012.scope: Deactivated successfully.
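Note on the "connecting to shim" lines above: the shim address is a URL-style unix:// endpoint, and containerd dials the path component over a Unix domain socket (with ttrpc v3 framing, per the protocol= field). A sketch of deriving the dial arguments from the logged address using only the standard library:

    package main

    import (
    	"fmt"
    	"log"
    	"net/url"
    )

    func main() {
    	// Address copied from the "connecting to shim" entry above.
    	addr := "unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d"

    	u, err := url.Parse(addr)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// net.Dial would take these two values: the network, then the path.
    	fmt.Println("network:", u.Scheme) // unix
    	fmt.Println("path:", u.Path)      // /run/containerd/s/572bf397...
    }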
Jan 23 01:01:22.525040 containerd[1552]: time="2026-01-23T01:01:22.524671468Z" level=info msg="received container exit event container_id:\"cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012\" id:\"cee75f9f57079706335f475717869032de63b4947f7943ec903908b1bf8a0012\" pid:4513 exited_at:{seconds:1769130082 nanos:524306424}"
Jan 23 01:01:22.562577 sshd[4471]: Accepted publickey for core from 68.220.241.50 port 34756 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:01:22.564398 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:01:22.570131 systemd-logind[1528]: New session 25 of user core.
Jan 23 01:01:22.574215 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 01:01:22.878075 kubelet[2724]: E0123 01:01:22.877219 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:22.883635 containerd[1552]: time="2026-01-23T01:01:22.883591242Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 01:01:22.892624 containerd[1552]: time="2026-01-23T01:01:22.892586917Z" level=info msg="Container 9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:01:22.898518 containerd[1552]: time="2026-01-23T01:01:22.898458335Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb\""
Jan 23 01:01:22.899309 containerd[1552]: time="2026-01-23T01:01:22.899252595Z" level=info msg="StartContainer for \"9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb\""
Jan 23 01:01:22.900632 containerd[1552]: time="2026-01-23T01:01:22.900588609Z" level=info msg="connecting to shim 9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb" address="unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d" protocol=ttrpc version=3
Jan 23 01:01:22.928213 systemd[1]: Started cri-containerd-9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb.scope - libcontainer container 9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb.
Jan 23 01:01:22.959615 containerd[1552]: time="2026-01-23T01:01:22.959572218Z" level=info msg="StartContainer for \"9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb\" returns successfully"
Jan 23 01:01:22.967637 systemd[1]: cri-containerd-9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb.scope: Deactivated successfully.
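Note on the exited_at field above: it is a protobuf-style timestamp, whole seconds plus nanoseconds since the Unix epoch. Converting it with the standard library reproduces the wall-clock moment the init container exited:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at:{seconds:1769130082 nanos:524306424} from the log above.
    	t := time.Unix(1769130082, 524306424).UTC()
    	fmt.Println(t.Format(time.RFC3339Nano)) // 2026-01-23T01:01:22.524306424Z
    }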
Jan 23 01:01:22.968182 containerd[1552]: time="2026-01-23T01:01:22.967867189Z" level=info msg="received container exit event container_id:\"9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb\" id:\"9727bf1a7e215831185ff13e04e6a0e74465d2c78b8b83adc7ed0f23e7677ccb\" pid:4566 exited_at:{seconds:1769130082 nanos:967716329}"
Jan 23 01:01:23.686299 kubelet[2724]: I0123 01:01:23.686253 2724 setters.go:543] "Node became not ready" node="172-236-108-127" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:01:23Z","lastTransitionTime":"2026-01-23T01:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 01:01:23.881649 kubelet[2724]: E0123 01:01:23.881596 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:23.885356 containerd[1552]: time="2026-01-23T01:01:23.885188028Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 01:01:23.904272 containerd[1552]: time="2026-01-23T01:01:23.904235220Z" level=info msg="Container 0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:01:23.914721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839459021.mount: Deactivated successfully.
Jan 23 01:01:23.919789 containerd[1552]: time="2026-01-23T01:01:23.919750299Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8\""
Jan 23 01:01:23.921255 containerd[1552]: time="2026-01-23T01:01:23.921212871Z" level=info msg="StartContainer for \"0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8\""
Jan 23 01:01:23.922904 containerd[1552]: time="2026-01-23T01:01:23.922875886Z" level=info msg="connecting to shim 0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8" address="unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d" protocol=ttrpc version=3
Jan 23 01:01:23.955204 systemd[1]: Started cri-containerd-0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8.scope - libcontainer container 0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8.
Jan 23 01:01:24.024213 containerd[1552]: time="2026-01-23T01:01:24.024165366Z" level=info msg="StartContainer for \"0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8\" returns successfully"
Jan 23 01:01:24.025485 systemd[1]: cri-containerd-0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8.scope: Deactivated successfully.
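Note on the setters.go:543 entry above: the condition value is plain JSON, so it can be decoded directly. A minimal sketch, with struct fields chosen to mirror the logged keys rather than kubelet's internal types:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // nodeCondition mirrors the keys of the condition JSON in the log above.
    type nodeCondition struct {
    	Type    string `json:"type"`
    	Status  string `json:"status"`
    	Reason  string `json:"reason"`
    	Message string `json:"message"`
    }

    func main() {
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:01:23Z","lastTransitionTime":"2026-01-23T01:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

    	var c nodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
    }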
Jan 23 01:01:24.027475 containerd[1552]: time="2026-01-23T01:01:24.027452035Z" level=info msg="received container exit event container_id:\"0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8\" id:\"0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8\" pid:4612 exited_at:{seconds:1769130084 nanos:26953423}"
Jan 23 01:01:24.051556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c22ceb55db29db43b6eec2cb53041f77f85553ee8fa00efced4fa1f34a8b1e8-rootfs.mount: Deactivated successfully.
Jan 23 01:01:24.885832 kubelet[2724]: E0123 01:01:24.885772 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:24.889698 containerd[1552]: time="2026-01-23T01:01:24.889669627Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 01:01:24.905003 containerd[1552]: time="2026-01-23T01:01:24.904090453Z" level=info msg="Container 50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:01:24.904837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173878063.mount: Deactivated successfully.
Jan 23 01:01:24.913038 containerd[1552]: time="2026-01-23T01:01:24.911982713Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2\""
Jan 23 01:01:24.913353 containerd[1552]: time="2026-01-23T01:01:24.913333228Z" level=info msg="StartContainer for \"50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2\""
Jan 23 01:01:24.915061 containerd[1552]: time="2026-01-23T01:01:24.915039987Z" level=info msg="connecting to shim 50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2" address="unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d" protocol=ttrpc version=3
Jan 23 01:01:24.936138 systemd[1]: Started cri-containerd-50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2.scope - libcontainer container 50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2.
Jan 23 01:01:24.966531 systemd[1]: cri-containerd-50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2.scope: Deactivated successfully.
Jan 23 01:01:24.968670 containerd[1552]: time="2026-01-23T01:01:24.968641498Z" level=info msg="received container exit event container_id:\"50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2\" id:\"50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2\" pid:4651 exited_at:{seconds:1769130084 nanos:968418904}"
Jan 23 01:01:24.977472 containerd[1552]: time="2026-01-23T01:01:24.977404155Z" level=info msg="StartContainer for \"50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2\" returns successfully"
Jan 23 01:01:25.004416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50b40d61eb3eb6f15d76c7da442dfeda91d1c28d01f84cbe086b265faef962e2-rootfs.mount: Deactivated successfully.
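Note on the \x2d and \x7e sequences in the mount unit names above (and in the var-lib-kubelet-pods-... units earlier): this is systemd's path escaping, in which the leading slash is dropped, interior slashes become dashes, and other bytes outside [A-Za-z0-9:_.] are hex-escaped. A simplified sketch of that transformation; the real systemd-escape has further rules (for example around a leading dot) that are skipped here:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapePath is a simplified systemd-escape --path: strip the leading
    // slash, turn the remaining slashes into dashes, and hex-escape every
    // byte outside [A-Za-z0-9:_.] as \xNN.
    func escapePath(p string) string {
    	p = strings.TrimPrefix(p, "/")
    	var b strings.Builder
    	for i := 0; i < len(p); i++ {
    		c := p[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
    			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    func main() {
    	// Reproduces the tmpmount unit name seen in the log above.
    	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount4173878063") + ".mount")
    }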
Jan 23 01:01:25.540711 kubelet[2724]: E0123 01:01:25.540661 2724 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 01:01:25.892528 kubelet[2724]: E0123 01:01:25.892132 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:25.900685 containerd[1552]: time="2026-01-23T01:01:25.900643040Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 01:01:25.911251 containerd[1552]: time="2026-01-23T01:01:25.911219535Z" level=info msg="Container 47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:01:25.921059 containerd[1552]: time="2026-01-23T01:01:25.921001970Z" level=info msg="CreateContainer within sandbox \"0c6957d5ea2d8e887f873c7449a3a1576e2cd0563e81cefe051552dac62127d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809\""
Jan 23 01:01:25.922032 containerd[1552]: time="2026-01-23T01:01:25.921977512Z" level=info msg="StartContainer for \"47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809\""
Jan 23 01:01:25.923354 containerd[1552]: time="2026-01-23T01:01:25.923298526Z" level=info msg="connecting to shim 47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809" address="unix:///run/containerd/s/572bf39747003ffe34b4ea906b721a43458e1a58d93d07dfa926fbf0b4048a3d" protocol=ttrpc version=3
Jan 23 01:01:25.955213 systemd[1]: Started cri-containerd-47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809.scope - libcontainer container 47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809.
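Note on the repeated dns.go:154 warnings: the glibc resolver honors at most three nameserver entries in resolv.conf, so when the node offers more, the kubelet keeps the first three and logs the applied line. A sketch of that truncation; the fourth server below is a hypothetical stand-in, since the log does not show which entry was omitted:

    package main

    import "fmt"

    // maxNameservers matches the glibc MAXNS limit of three nameservers.
    const maxNameservers = 3

    func applyLimit(servers []string) []string {
    	if len(servers) > maxNameservers {
    		servers = servers[:maxNameservers]
    	}
    	return servers
    }

    func main() {
    	// The first three servers come from the applied line in the log above;
    	// 192.0.2.1 is a hypothetical extra entry that would be dropped.
    	servers := []string{"172.232.0.20", "172.232.0.15", "172.232.0.18", "192.0.2.1"}
    	fmt.Println(applyLimit(servers)) // [172.232.0.20 172.232.0.15 172.232.0.18]
    }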
Jan 23 01:01:26.007896 containerd[1552]: time="2026-01-23T01:01:26.007858203Z" level=info msg="StartContainer for \"47b3998e5718294cccc43bb61dc22447e7cd3bf6430dd092be9e41271d227809\" returns successfully"
Jan 23 01:01:26.497109 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 23 01:01:26.899188 kubelet[2724]: E0123 01:01:26.899068 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:27.439037 kubelet[2724]: E0123 01:01:27.439000 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:28.334463 kubelet[2724]: E0123 01:01:28.334427 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:29.486313 systemd-networkd[1427]: lxc_health: Link UP
Jan 23 01:01:29.486729 systemd-networkd[1427]: lxc_health: Gained carrier
Jan 23 01:01:30.336645 kubelet[2724]: E0123 01:01:30.336586 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:30.359793 kubelet[2724]: I0123 01:01:30.359733 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p9hc5" podStartSLOduration=9.359718974 podStartE2EDuration="9.359718974s" podCreationTimestamp="2026-01-23 01:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:01:26.924341565 +0000 UTC m=+166.612429100" watchObservedRunningTime="2026-01-23 01:01:30.359718974 +0000 UTC m=+170.047806509"
Jan 23 01:01:30.909430 kubelet[2724]: E0123 01:01:30.909372 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:31.142422 systemd-networkd[1427]: lxc_health: Gained IPv6LL
Jan 23 01:01:31.913046 kubelet[2724]: E0123 01:01:31.911625 2724 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Jan 23 01:01:33.334614 kubelet[2724]: E0123 01:01:33.334550 2724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48580->127.0.0.1:37353: write tcp 127.0.0.1:48580->127.0.0.1:37353: write: broken pipe
Jan 23 01:01:35.520611 sshd[4545]: Connection closed by 68.220.241.50 port 34756
Jan 23 01:01:35.521499 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Jan 23 01:01:35.530656 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit.
Jan 23 01:01:35.530950 systemd[1]: sshd@24-172.236.108.127:22-68.220.241.50:34756.service: Deactivated successfully.
Jan 23 01:01:35.533508 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 01:01:35.535622 systemd-logind[1528]: Removed session 25.
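Note on the pod_startup_latency_tracker entry above: the reported podStartSLOduration of 9.359718974s matches the gap between podCreationTimestamp and watchObservedRunningTime (the zero-valued pull timestamps indicate no image pull contributed). Reproducing the arithmetic:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps from the pod_startup_latency_tracker entry above.
    	created := time.Date(2026, time.January, 23, 1, 1, 21, 0, time.UTC)
    	observed := time.Date(2026, time.January, 23, 1, 1, 30, 359718974, time.UTC)
    	fmt.Println(observed.Sub(created)) // 9.359718974s
    }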