Dec 12 18:42:50.919045 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:42:50.919079 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:42:50.919093 kernel: BIOS-provided physical RAM map:
Dec 12 18:42:50.919103 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:42:50.919112 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:42:50.919123 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:42:50.919138 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:42:50.919146 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:42:50.919152 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:42:50.919159 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:42:50.919165 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:42:50.919171 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:42:50.919177 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:42:50.919184 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:42:50.919193 kernel: NX (Execute Disable) protection: active
Dec 12 18:42:50.919200 kernel: APIC: Static calls initialized
Dec 12 18:42:50.919207 kernel: SMBIOS 2.8 present.
Dec 12 18:42:50.919213 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:42:50.919220 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:42:50.919227 kernel: Hypervisor detected: KVM
Dec 12 18:42:50.919235 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:42:50.919242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:42:50.919248 kernel: kvm-clock: using sched offset of 7061284997 cycles
Dec 12 18:42:50.919255 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:42:50.919262 kernel: tsc: Detected 1999.998 MHz processor
Dec 12 18:42:50.919269 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:42:50.919277 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:42:50.919283 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:42:50.919290 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:42:50.919297 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:42:50.919306 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:42:50.919313 kernel: Using GB pages for direct mapping
Dec 12 18:42:50.919320 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:42:50.919326 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:42:50.919333 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919340 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919347 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919353 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:42:50.919360 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919369 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919380 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919387 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:42:50.919394 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:42:50.919401 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:42:50.919410 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:42:50.919417 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:42:50.919424 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:42:50.919431 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:42:50.919438 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:42:50.919445 kernel: No NUMA configuration found
Dec 12 18:42:50.919452 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:42:50.919459 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Dec 12 18:42:50.919466 kernel: Zone ranges:
Dec 12 18:42:50.919510 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:42:50.919522 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:42:50.919534 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:42:50.919546 kernel: Device empty
Dec 12 18:42:50.919560 kernel: Movable zone start for each node
Dec 12 18:42:50.919569 kernel: Early memory node ranges
Dec 12 18:42:50.919577 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:42:50.919584 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:42:50.919591 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:42:50.919598 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:42:50.919612 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:42:50.919624 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:42:50.919635 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:42:50.919646 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:42:50.919658 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:42:50.919667 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:42:50.919674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:42:50.919681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:42:50.919688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:42:50.919698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:42:50.919706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:42:50.919713 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:42:50.919720 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:42:50.919727 kernel: TSC deadline timer available
Dec 12 18:42:50.919734 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:42:50.919741 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:42:50.919748 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:42:50.919755 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:42:50.919764 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:42:50.919771 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:42:50.919778 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:42:50.919785 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:42:50.919792 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:42:50.919799 kernel: kvm-guest: setup PV sched yield
Dec 12 18:42:50.919806 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:42:50.919814 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:42:50.919821 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:42:50.919830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:42:50.919837 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:42:50.919844 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:42:50.919851 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:42:50.919858 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:42:50.919865 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:42:50.919873 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:42:50.919881 kernel: random: crng init done
Dec 12 18:42:50.919888 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:42:50.919897 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:42:50.919904 kernel: Fallback order for Node 0: 0
Dec 12 18:42:50.919912 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:42:50.919919 kernel: Policy zone: Normal
Dec 12 18:42:50.919926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:42:50.919933 kernel: software IO TLB: area num 2.
Dec 12 18:42:50.919940 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:42:50.919947 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:42:50.919954 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:42:50.919963 kernel: Dynamic Preempt: voluntary
Dec 12 18:42:50.919970 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:42:50.919978 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:42:50.919985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:42:50.919992 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:42:50.920000 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:42:50.920007 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:42:50.920014 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:42:50.920021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:42:50.920031 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:42:50.920045 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:42:50.920052 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:42:50.920062 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:42:50.920069 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:42:50.920077 kernel: Console: colour VGA+ 80x25
Dec 12 18:42:50.920084 kernel: printk: legacy console [tty0] enabled
Dec 12 18:42:50.920091 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:42:50.920099 kernel: ACPI: Core revision 20240827
Dec 12 18:42:50.920109 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:42:50.920116 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:42:50.920123 kernel: x2apic enabled
Dec 12 18:42:50.920131 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:42:50.920171 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:42:50.920179 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:42:50.920186 kernel: kvm-guest: setup PV IPIs
Dec 12 18:42:50.920196 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:42:50.920204 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Dec 12 18:42:50.920211 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Dec 12 18:42:50.920218 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:42:50.920225 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:42:50.920232 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:42:50.920239 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:42:50.920246 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:42:50.920253 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:42:50.920263 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:42:50.920270 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:42:50.920277 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:42:50.920284 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:42:50.920292 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:42:50.920299 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:42:50.920306 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:42:50.920313 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:42:50.920322 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:42:50.920329 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:42:50.920336 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:42:50.920343 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:42:50.920350 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:42:50.920358 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:42:50.920365 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:42:50.920372 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:42:50.920379 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:42:50.920388 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:42:50.920395 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:42:50.920402 kernel: landlock: Up and running.
Dec 12 18:42:50.920409 kernel: SELinux: Initializing.
Dec 12 18:42:50.920416 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:42:50.920423 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:42:50.920430 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:42:50.920437 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:42:50.920444 kernel: ... version:                0
Dec 12 18:42:50.920453 kernel: ... bit width:              48
Dec 12 18:42:50.920460 kernel: ... generic registers:      6
Dec 12 18:42:50.920467 kernel: ... value mask:             0000ffffffffffff
Dec 12 18:42:50.920498 kernel: ... max period:             00007fffffffffff
Dec 12 18:42:50.920507 kernel: ... fixed-purpose events:   0
Dec 12 18:42:50.920514 kernel: ... event mask:             000000000000003f
Dec 12 18:42:50.920521 kernel: signal: max sigframe size: 3376
Dec 12 18:42:50.920528 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:42:50.920535 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:42:50.920545 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:42:50.920552 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:42:50.920559 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:42:50.920566 kernel: .... node #0, CPUs: #1
Dec 12 18:42:50.920573 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:42:50.920580 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Dec 12 18:42:50.920588 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235480K reserved, 0K cma-reserved)
Dec 12 18:42:50.920595 kernel: devtmpfs: initialized
Dec 12 18:42:50.920602 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:42:50.920609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:42:50.920618 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:42:50.920626 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:42:50.920633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:42:50.920639 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:42:50.920647 kernel: audit: type=2000 audit(1765564967.360:1): state=initialized audit_enabled=0 res=1
Dec 12 18:42:50.920653 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:42:50.920660 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:42:50.920667 kernel: cpuidle: using governor menu
Dec 12 18:42:50.920677 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:42:50.920684 kernel: dca service started, version 1.12.1
Dec 12 18:42:50.920691 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:42:50.920698 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:42:50.920705 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:42:50.920712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:42:50.920719 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:42:50.920726 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:42:50.920733 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:42:50.920743 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:42:50.920750 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:42:50.920757 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:42:50.920764 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:42:50.920771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:42:50.920778 kernel: ACPI: Interpreter enabled
Dec 12 18:42:50.920785 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:42:50.920792 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:42:50.920799 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:42:50.920808 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:42:50.920815 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:42:50.920822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:42:50.921010 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:42:50.921141 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:42:50.921263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:42:50.921272 kernel: PCI host bridge to bus 0000:00
Dec 12 18:42:50.921395 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:42:50.921536 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:42:50.921650 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:42:50.921760 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:42:50.921868 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:42:50.921977 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:42:50.922086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:42:50.922230 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:42:50.922368 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:42:50.922525 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:42:50.922684 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:42:50.922838 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:42:50.922962 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:42:50.923111 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:42:50.923241 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:42:50.923379 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:42:50.923904 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:42:50.924056 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:42:50.924184 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:42:50.924307 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:42:50.924426 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:42:50.924648 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:42:50.924789 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:42:50.924913 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:42:50.925040 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:42:50.925160 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:42:50.925299 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:42:50.925447 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:42:50.925592 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:42:50.925604 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:42:50.925611 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:42:50.925619 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:42:50.925626 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:42:50.925633 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:42:50.925641 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:42:50.925652 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:42:50.925659 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:42:50.925666 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:42:50.925673 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:42:50.925680 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:42:50.925688 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:42:50.925695 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:42:50.925702 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:42:50.925710 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:42:50.925719 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:42:50.925726 kernel: iommu: Default domain type: Translated
Dec 12 18:42:50.925734 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:42:50.925741 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:42:50.925748 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:42:50.925755 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:42:50.925763 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:42:50.925882 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:42:50.926048 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:42:50.926172 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:42:50.926182 kernel: vgaarb: loaded
Dec 12 18:42:50.926189 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:42:50.926197 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:42:50.926204 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:42:50.926211 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:42:50.926219 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:42:50.926226 kernel: pnp: PnP ACPI init
Dec 12 18:42:50.926361 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:42:50.926373 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:42:50.926380 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:42:50.926388 kernel: NET: Registered PF_INET protocol family
Dec 12 18:42:50.926395 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:42:50.926402 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:42:50.926410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:42:50.926417 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:42:50.926428 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:42:50.926435 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:42:50.926443 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:42:50.926450 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:42:50.926457 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:42:50.926465 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:42:50.926675 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:42:50.926789 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:42:50.926901 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:42:50.927017 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:42:50.927128 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:42:50.927269 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:42:50.927281 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:42:50.927289 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:42:50.927296 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:42:50.927304 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Dec 12 18:42:50.927311 kernel: Initialise system trusted keyrings
Dec 12 18:42:50.927322 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:42:50.927330 kernel: Key type asymmetric registered
Dec 12 18:42:50.927337 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:42:50.927344 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:42:50.927352 kernel: io scheduler mq-deadline registered
Dec 12 18:42:50.927359 kernel: io scheduler kyber registered
Dec 12 18:42:50.927366 kernel: io scheduler bfq registered
Dec 12 18:42:50.927373 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:42:50.927381 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:42:50.927408 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:42:50.927438 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:42:50.927445 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:42:50.927453 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:42:50.927460 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:42:50.927467 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:42:50.927660 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:42:50.927673 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:42:50.927815 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:42:50.927937 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:42:50 UTC (1765564970)
Dec 12 18:42:50.928050 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:42:50.928059 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:42:50.928066 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:42:50.928074 kernel: Segment Routing with IPv6
Dec 12 18:42:50.928081 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:42:50.928088 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:42:50.928096 kernel: Key type dns_resolver registered
Dec 12 18:42:50.928106 kernel: IPI shorthand broadcast: enabled
Dec 12 18:42:50.928114 kernel: sched_clock: Marking stable (2856004393, 336791449)->(3281878851, -89083009)
Dec 12 18:42:50.928121 kernel: registered taskstats version 1
Dec 12 18:42:50.928128 kernel: Loading compiled-in X.509 certificates
Dec 12 18:42:50.928136 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:42:50.928143 kernel: Demotion targets for Node 0: null
Dec 12 18:42:50.928150 kernel: Key type .fscrypt registered
Dec 12 18:42:50.928157 kernel: Key type fscrypt-provisioning registered
Dec 12 18:42:50.928165 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:42:50.928174 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:42:50.928182 kernel: ima: No architecture policies found
Dec 12 18:42:50.928189 kernel: clk: Disabling unused clocks
Dec 12 18:42:50.928196 kernel: Warning: unable to open an initial console.
Dec 12 18:42:50.928204 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:42:50.928211 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:42:50.928219 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:42:50.928226 kernel: Run /init as init process
Dec 12 18:42:50.928233 kernel:   with arguments:
Dec 12 18:42:50.928243 kernel:     /init
Dec 12 18:42:50.928250 kernel:   with environment:
Dec 12 18:42:50.928272 kernel:     HOME=/
Dec 12 18:42:50.928282 kernel:     TERM=linux
Dec 12 18:42:50.928290 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:42:50.928301 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:42:50.928310 systemd[1]: Detected virtualization kvm.
Dec 12 18:42:50.928320 systemd[1]: Detected architecture x86-64.
Dec 12 18:42:50.928328 systemd[1]: Running in initrd.
Dec 12 18:42:50.928335 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:42:50.928343 systemd[1]: Hostname set to .
Dec 12 18:42:50.928351 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:42:50.928359 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:42:50.928367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:42:50.928375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:42:50.928383 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:42:50.928394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:42:50.928402 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:42:50.928410 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:42:50.928419 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:42:50.928427 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:42:50.928435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:42:50.928446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:42:50.928454 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:42:50.928461 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:42:50.928469 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:42:50.928497 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:42:50.928506 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:42:50.928514 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:42:50.928522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:42:50.928530 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:42:50.928541 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:42:50.928549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:42:50.928559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:42:50.928567 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:42:50.928575 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:42:50.928585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:42:50.928593 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:42:50.928601 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:42:50.928609 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:42:50.928617 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:42:50.928625 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:42:50.928633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:42:50.928663 systemd-journald[187]: Collecting audit messages is disabled.
Dec 12 18:42:50.928685 systemd-journald[187]: Journal started
Dec 12 18:42:50.928735 systemd-journald[187]: Runtime Journal (/run/log/journal/4397d8ae36694d7c868e2d6b626cfc3c) is 8M, max 78.2M, 70.2M free.
Dec 12 18:42:50.930821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:42:50.927729 systemd-modules-load[188]: Inserted module 'overlay'
Dec 12 18:42:50.938333 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:42:50.940190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:42:50.942239 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:42:50.946632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:42:50.949975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:42:51.070071 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:42:51.070097 kernel: Bridge firewalling registered
Dec 12 18:42:50.983251 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 12 18:42:51.072613 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:42:51.074996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:42:51.076990 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:42:51.079106 systemd-tmpfiles[199]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:42:51.082608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:42:51.086648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:42:51.090024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:42:51.092428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:42:51.108078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:42:51.109083 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:42:51.116620 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:42:51.118714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:42:51.128043 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:42:51.153598 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:42:51.163112 systemd-resolved[224]: Positive Trust Anchors:
Dec 12 18:42:51.163680 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:42:51.163747 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:42:51.170441 systemd-resolved[224]: Defaulting to hostname 'linux'.
Dec 12 18:42:51.171680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:42:51.172867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:42:51.248528 kernel: SCSI subsystem initialized
Dec 12 18:42:51.258511 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:42:51.269505 kernel: iscsi: registered transport (tcp)
Dec 12 18:42:51.290545 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:42:51.290622 kernel: QLogic iSCSI HBA Driver
Dec 12 18:42:51.312463 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:42:51.330976 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:42:51.334622 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:42:51.389814 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:42:51.392202 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:42:51.447509 kernel: raid6: avx2x4 gen() 30993 MB/s
Dec 12 18:42:51.465519 kernel: raid6: avx2x2 gen() 29267 MB/s
Dec 12 18:42:51.483632 kernel: raid6: avx2x1 gen() 20293 MB/s
Dec 12 18:42:51.483670 kernel: raid6: using algorithm avx2x4 gen() 30993 MB/s
Dec 12 18:42:51.503852 kernel: raid6: .... xor() 4568 MB/s, rmw enabled
Dec 12 18:42:51.503881 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:42:51.526512 kernel: xor: automatically using best checksumming function avx
Dec 12 18:42:51.664517 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:42:51.672092 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:42:51.675013 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:42:51.705811 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Dec 12 18:42:51.711965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:42:51.715779 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:42:51.741546 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Dec 12 18:42:51.772006 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:42:51.774275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:42:51.848901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:42:51.851966 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:42:51.922546 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 12 18:42:51.932518 kernel: libata version 3.00 loaded.
Dec 12 18:42:51.950505 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:42:51.954506 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:42:51.965795 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:42:51.962269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:42:51.964902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:42:51.968127 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:42:52.107857 kernel: scsi host0: Virtio SCSI HBA
Dec 12 18:42:51.973174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:42:52.129556 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:42:52.129589 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:42:52.129602 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 12 18:42:52.147289 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:42:52.147574 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:42:52.147733 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:42:52.200528 kernel: scsi host1: ahci
Dec 12 18:42:52.200773 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 12 18:42:52.200976 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 12 18:42:52.201133 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 12 18:42:52.201281 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 12 18:42:52.201428 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:42:52.201609 kernel: scsi host2: ahci
Dec 12 18:42:52.202502 kernel: scsi host3: ahci
Dec 12 18:42:52.202684 kernel: scsi host4: ahci
Dec 12 18:42:52.202839 kernel: scsi host5: ahci
Dec 12 18:42:52.204525 kernel: scsi host6: ahci
Dec 12 18:42:52.204842 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Dec 12 18:42:52.204861 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Dec 12 18:42:52.204873 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Dec 12 18:42:52.204895 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Dec 12 18:42:52.204906 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Dec 12 18:42:52.204917 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Dec 12 18:42:52.206524 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:42:52.206560 kernel: GPT:9289727 != 167739391
Dec 12 18:42:52.206573 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:42:52.206584 kernel: GPT:9289727 != 167739391
Dec 12 18:42:52.206596 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:42:52.206620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:42:52.206639 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 12 18:42:52.340862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:42:52.517929 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 18:42:52.517992 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:42:52.518006 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:42:52.518017 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:42:52.518507 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:42:52.520514 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:42:52.591277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 12 18:42:52.600450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 12 18:42:52.609058 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 12 18:42:52.609858 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 12 18:42:52.611966 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:42:52.622207 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:42:52.624550 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:42:52.625334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:42:52.627147 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:42:52.629562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:42:52.632814 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:42:52.650728 disk-uuid[614]: Primary Header is updated.
Dec 12 18:42:52.650728 disk-uuid[614]: Secondary Entries is updated.
Dec 12 18:42:52.650728 disk-uuid[614]: Secondary Header is updated.
Dec 12 18:42:52.657246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:42:52.661167 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:42:53.679648 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:42:53.680984 disk-uuid[617]: The operation has completed successfully.
Dec 12 18:42:53.737559 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:42:53.737730 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:42:53.778191 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:42:53.798116 sh[636]: Success
Dec 12 18:42:53.820510 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:42:53.820553 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:42:53.826281 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:42:53.838636 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:42:53.890727 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:42:53.894058 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:42:53.906772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:42:53.920518 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (648)
Dec 12 18:42:53.925003 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:42:53.925046 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:42:53.938112 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:42:53.938208 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:42:53.938224 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:42:53.942923 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:42:53.944511 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:42:53.945703 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:42:53.946647 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:42:53.951663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:42:53.984540 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (680)
Dec 12 18:42:53.991469 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:42:53.991507 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:42:54.003389 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:42:54.003452 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:42:54.003470 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:42:54.012517 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:42:54.014148 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:42:54.018528 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:42:54.102882 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:42:54.108162 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:42:54.156835 ignition[754]: Ignition 2.22.0
Dec 12 18:42:54.156852 ignition[754]: Stage: fetch-offline
Dec 12 18:42:54.156893 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:54.156903 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:54.162205 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:42:54.156996 ignition[754]: parsed url from cmdline: ""
Dec 12 18:42:54.157001 ignition[754]: no config URL provided
Dec 12 18:42:54.157007 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:42:54.157016 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:42:54.167380 systemd-networkd[817]: lo: Link UP
Dec 12 18:42:54.157021 ignition[754]: failed to fetch config: resource requires networking
Dec 12 18:42:54.167385 systemd-networkd[817]: lo: Gained carrier
Dec 12 18:42:54.157159 ignition[754]: Ignition finished successfully
Dec 12 18:42:54.169724 systemd-networkd[817]: Enumeration completed
Dec 12 18:42:54.170112 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:42:54.170115 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:42:54.170120 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:42:54.172071 systemd-networkd[817]: eth0: Link UP
Dec 12 18:42:54.172783 systemd[1]: Reached target network.target - Network.
Dec 12 18:42:54.173338 systemd-networkd[817]: eth0: Gained carrier
Dec 12 18:42:54.173348 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:42:54.176811 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:42:54.212154 ignition[825]: Ignition 2.22.0
Dec 12 18:42:54.212168 ignition[825]: Stage: fetch
Dec 12 18:42:54.212279 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:54.212290 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:54.212376 ignition[825]: parsed url from cmdline: ""
Dec 12 18:42:54.212381 ignition[825]: no config URL provided
Dec 12 18:42:54.212387 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:42:54.212396 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:42:54.212424 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 12 18:42:54.212613 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:42:54.412821 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 12 18:42:54.412999 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:42:54.813344 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 12 18:42:54.813515 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:42:55.023549 systemd-networkd[817]: eth0: DHCPv4 address 172.238.172.51/24, gateway 172.238.172.1 acquired from 23.213.15.82
Dec 12 18:42:55.294601 systemd-networkd[817]: eth0: Gained IPv6LL
Dec 12 18:42:55.613688 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 12 18:42:55.693702 ignition[825]: PUT result: OK
Dec 12 18:42:55.693760 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 12 18:42:55.804349 ignition[825]: GET result: OK
Dec 12 18:42:55.804516 ignition[825]: parsing config with SHA512: 2178818fd95c9f8f650aa8e48499e822091920a4980032d88504723044452da70246ca01af55c4760df4ab64783bd7a37a20dff5e1adee9d66379861fe6fd81c
Dec 12 18:42:55.809601 unknown[825]: fetched base config from "system"
Dec 12 18:42:55.809861 ignition[825]: fetch: fetch complete
Dec 12 18:42:55.809612 unknown[825]: fetched base config from "system"
Dec 12 18:42:55.809867 ignition[825]: fetch: fetch passed
Dec 12 18:42:55.809617 unknown[825]: fetched user config from "akamai"
Dec 12 18:42:55.809906 ignition[825]: Ignition finished successfully
Dec 12 18:42:55.813787 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:42:55.838085 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:42:55.870772 ignition[833]: Ignition 2.22.0
Dec 12 18:42:55.870799 ignition[833]: Stage: kargs
Dec 12 18:42:55.870971 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:55.870989 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:55.871742 ignition[833]: kargs: kargs passed
Dec 12 18:42:55.873614 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:42:55.871785 ignition[833]: Ignition finished successfully
Dec 12 18:42:55.877586 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:42:55.899968 ignition[839]: Ignition 2.22.0
Dec 12 18:42:55.899993 ignition[839]: Stage: disks
Dec 12 18:42:55.900218 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:55.900234 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:55.900952 ignition[839]: disks: disks passed
Dec 12 18:42:55.903684 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:42:55.900999 ignition[839]: Ignition finished successfully
Dec 12 18:42:55.904958 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:42:55.906195 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:42:55.907655 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:42:55.909211 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:42:55.911018 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:42:55.914607 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:42:55.941731 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:42:55.947565 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:42:55.950059 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:42:56.070502 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:42:56.071038 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:42:56.072416 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:42:56.074980 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:42:56.077552 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:42:56.080224 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:42:56.080277 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:42:56.080303 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:42:56.092748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:42:56.096060 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855)
Dec 12 18:42:56.099743 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:42:56.099767 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:42:56.106616 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:42:56.106642 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:42:56.106605 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:42:56.111742 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:42:56.113937 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:42:56.162742 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:42:56.169993 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:42:56.175544 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:42:56.180357 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:42:56.280432 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:42:56.283593 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:42:56.285925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:42:56.299583 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:42:56.305153 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:42:56.320892 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:42:56.334664 ignition[968]: INFO : Ignition 2.22.0
Dec 12 18:42:56.334664 ignition[968]: INFO : Stage: mount
Dec 12 18:42:56.336321 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:56.336321 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:56.336321 ignition[968]: INFO : mount: mount passed
Dec 12 18:42:56.336321 ignition[968]: INFO : Ignition finished successfully
Dec 12 18:42:56.338264 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:42:56.341613 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:42:57.072788 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:42:57.103517 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979)
Dec 12 18:42:57.110426 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:42:57.110540 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:42:57.117814 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:42:57.117857 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:42:57.117876 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:42:57.122413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:42:57.157776 ignition[995]: INFO : Ignition 2.22.0
Dec 12 18:42:57.157776 ignition[995]: INFO : Stage: files
Dec 12 18:42:57.159631 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:57.159631 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:57.159631 ignition[995]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:42:57.162851 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:42:57.162851 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:42:57.164899 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:42:57.164899 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:42:57.167305 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:42:57.166158 unknown[995]: wrote ssh authorized keys file for user: core
Dec 12 18:42:57.169701 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:42:57.169701 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 12 18:42:57.286585 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:42:57.332232 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:42:57.332232 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 18:42:57.334673 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 12 18:42:57.544118 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 12 18:42:57.598389 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 18:42:57.598389 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:42:57.601416 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:42:57.632253 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:42:57.632253 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:42:57.632253 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:42:57.632253 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 12 18:42:57.950420 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 12 18:42:58.240534 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:42:58.240534 ignition[995]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 12 18:42:58.243421 ignition[995]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:42:58.244998 ignition[995]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:42:58.244998 ignition[995]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 12 18:42:58.244998 ignition[995]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:42:58.249036 ignition[995]: INFO : files: files passed
Dec 12 18:42:58.249036 ignition[995]: INFO : Ignition finished successfully
Dec 12 18:42:58.248523 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:42:58.250833 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:42:58.254976 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:42:58.264630 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:42:58.264745 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:42:58.277041 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:42:58.278342 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:42:58.279523 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:42:58.280865 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:42:58.282361 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:42:58.284142 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:42:58.349232 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:42:58.349394 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:42:58.351602 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:42:58.352797 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:42:58.354569 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:42:58.355366 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:42:58.376087 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:42:58.379934 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:42:58.403786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:42:58.404723 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:42:58.406574 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:42:58.408256 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:42:58.408413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:42:58.410215 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:42:58.411349 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:42:58.412961 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:42:58.414510 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:42:58.415918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:42:58.417630 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:42:58.419340 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:42:58.421040 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:42:58.422849 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:42:58.424441 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:42:58.426124 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:42:58.427850 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:42:58.427985 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:42:58.429772 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:42:58.430903 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:42:58.432398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:42:58.432897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:42:58.434285 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:42:58.434516 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:42:58.436427 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:42:58.436568 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:42:58.437620 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:42:58.437720 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:42:58.441573 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:42:58.442513 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:42:58.442672 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:42:58.447577 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:42:58.448302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:42:58.448464 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:42:58.449990 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:42:58.450162 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:42:58.495340 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:42:58.495463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:42:58.503560 ignition[1049]: INFO : Ignition 2.22.0
Dec 12 18:42:58.503560 ignition[1049]: INFO : Stage: umount
Dec 12 18:42:58.503560 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:42:58.503560 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:42:58.503560 ignition[1049]: INFO : umount: umount passed
Dec 12 18:42:58.503560 ignition[1049]: INFO : Ignition finished successfully
Dec 12 18:42:58.504934 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:42:58.505105 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:42:58.506872 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:42:58.506963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:42:58.509748 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:42:58.509801 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:42:58.511589 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:42:58.511661 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:42:58.514004 systemd[1]: Stopped target network.target - Network.
Dec 12 18:42:58.516618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:42:58.516676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:42:58.517557 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:42:58.518279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:42:58.518537 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:42:58.519844 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:42:58.521348 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:42:58.522856 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:42:58.522904 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:42:58.524305 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:42:58.524347 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:42:58.525751 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:42:58.525808 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:42:58.527192 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:42:58.527240 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:42:58.528824 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:42:58.530411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:42:58.534769 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:42:58.535404 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:42:58.535564 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:42:58.537636 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:42:58.537767 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:42:58.542940 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:42:58.543221 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:42:58.543367 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:42:58.546023 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:42:58.547261 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:42:58.548342 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:42:58.548388 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:42:58.550016 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:42:58.550074 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:42:58.552939 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:42:58.554956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:42:58.555009 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:42:58.557490 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:42:58.557545 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:42:58.559892 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:42:58.559976 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:42:58.560797 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:42:58.560856 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:42:58.563106 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:42:58.570202 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:42:58.570271 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:42:58.582924 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:42:58.589889 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:42:58.592291 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:42:58.592397 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:42:58.594203 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:42:58.594267 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:42:58.595505 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:42:58.595564 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:42:58.597410 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:42:58.597466 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:42:58.599439 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:42:58.599513 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:42:58.601182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:42:58.601230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:42:58.604580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:42:58.606829 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:42:58.606884 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:42:58.609425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:42:58.609501 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:42:58.611582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:42:58.611634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:42:58.614243 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:42:58.614300 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:42:58.614349 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:42:58.621417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:42:58.621558 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:42:58.624320 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:42:58.626306 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:42:58.666304 systemd[1]: Switching root.
Dec 12 18:42:58.705443 systemd-journald[187]: Journal stopped
Dec 12 18:43:00.050424 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:43:00.050459 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:43:00.050472 kernel: SELinux: policy capability open_perms=1
Dec 12 18:43:00.050504 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:43:00.050514 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:43:00.050527 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:43:00.050537 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:43:00.050547 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:43:00.050556 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:43:00.050566 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:43:00.050576 kernel: audit: type=1403 audit(1765564978.881:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:43:00.050586 systemd[1]: Successfully loaded SELinux policy in 73.341ms.
Dec 12 18:43:00.050600 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.897ms.
Dec 12 18:43:00.050612 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:43:00.050623 systemd[1]: Detected virtualization kvm.
Dec 12 18:43:00.050633 systemd[1]: Detected architecture x86-64.
Dec 12 18:43:00.050646 systemd[1]: Detected first boot.
Dec 12 18:43:00.050656 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:43:00.050667 zram_generator::config[1094]: No configuration found.
Dec 12 18:43:00.050678 kernel: Guest personality initialized and is inactive
Dec 12 18:43:00.050688 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:43:00.050699 kernel: Initialized host personality
Dec 12 18:43:00.050708 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:43:00.050719 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:43:00.050732 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:43:00.050743 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:43:00.050753 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:43:00.050763 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:43:00.050774 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:43:00.050784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:43:00.050795 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:43:00.050808 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:43:00.050819 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:43:00.050829 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:43:00.050840 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:43:00.050851 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:43:00.050862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:43:00.050872 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:43:00.050883 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:43:00.050895 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:43:00.050909 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:43:00.050920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:43:00.050931 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:43:00.050942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:43:00.050953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:43:00.050964 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:43:00.050978 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:43:00.050988 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:43:00.050999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:43:00.051010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:43:00.051021 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:43:00.051034 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:43:00.051044 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:43:00.051055 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:43:00.051066 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:43:00.051079 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:43:00.051090 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:43:00.051101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:43:00.051112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:43:00.051125 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:43:00.051136 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:43:00.051147 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:43:00.051157 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:43:00.051168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:00.051179 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:43:00.051190 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:43:00.051201 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:43:00.051215 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:43:00.051226 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:43:00.051236 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:43:00.051247 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:00.051259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:43:00.051271 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:43:00.051282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:43:00.051293 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:43:00.051304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:43:00.051317 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:43:00.051327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:43:00.051338 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:43:00.051348 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:43:00.051359 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:43:00.051369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:43:00.051379 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:43:00.051390 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:43:00.051403 kernel: fuse: init (API version 7.41)
Dec 12 18:43:00.051413 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:43:00.051423 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:43:00.051434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:43:00.051444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:43:00.051455 kernel: ACPI: bus type drm_connector registered
Dec 12 18:43:00.051465 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:43:00.056215 kernel: loop: module loaded
Dec 12 18:43:00.056241 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:43:00.056254 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:43:00.056265 systemd[1]: Stopped verity-setup.service.
Dec 12 18:43:00.056277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:00.056288 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:43:00.056299 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:43:00.056310 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:43:00.056321 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:43:00.056332 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:43:00.056346 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:43:00.056356 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:43:00.056391 systemd-journald[1178]: Collecting audit messages is disabled.
Dec 12 18:43:00.056414 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:43:00.056428 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:43:00.056439 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:43:00.056450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:43:00.056461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:43:00.056471 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:43:00.056503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:43:00.056515 systemd-journald[1178]: Journal started
Dec 12 18:43:00.056538 systemd-journald[1178]: Runtime Journal (/run/log/journal/ec30a26eab9f437ea4b46abac79dd50d) is 8M, max 78.2M, 70.2M free.
Dec 12 18:42:59.584606 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:42:59.606412 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 12 18:42:59.607050 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:43:00.060504 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:43:00.063063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:43:00.063278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:43:00.064462 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:43:00.064780 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:43:00.065860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:43:00.066134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:43:00.067437 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:43:00.068673 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:43:00.069799 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:43:00.071122 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:43:00.085886 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:43:00.089561 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:43:00.092552 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:43:00.095539 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:43:00.095568 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:43:00.097280 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:43:00.108557 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:43:00.110786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:00.112636 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:43:00.115603 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:43:00.116497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:43:00.118650 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:43:00.119425 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:43:00.122000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:43:00.126468 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:43:00.129551 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:43:00.137718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:43:00.142001 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:43:00.154922 systemd-journald[1178]: Time spent on flushing to /var/log/journal/ec30a26eab9f437ea4b46abac79dd50d is 97.963ms for 1009 entries.
Dec 12 18:43:00.154922 systemd-journald[1178]: System Journal (/var/log/journal/ec30a26eab9f437ea4b46abac79dd50d) is 8M, max 195.6M, 187.6M free.
Dec 12 18:43:00.268786 systemd-journald[1178]: Received client request to flush runtime journal.
Dec 12 18:43:00.268838 kernel: loop0: detected capacity change from 0 to 128560
Dec 12 18:43:00.268868 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:43:00.268889 kernel: loop1: detected capacity change from 0 to 110984
Dec 12 18:43:00.144527 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:43:00.169623 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:43:00.175059 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:43:00.181851 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:43:00.224301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:43:00.240963 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:43:00.274206 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:43:00.278934 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:43:00.285282 kernel: loop2: detected capacity change from 0 to 8
Dec 12 18:43:00.283653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:43:00.308940 kernel: loop3: detected capacity change from 0 to 229808
Dec 12 18:43:00.328338 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Dec 12 18:43:00.328356 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Dec 12 18:43:00.341983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:43:00.371496 kernel: loop4: detected capacity change from 0 to 128560
Dec 12 18:43:00.393828 kernel: loop5: detected capacity change from 0 to 110984
Dec 12 18:43:00.411663 kernel: loop6: detected capacity change from 0 to 8
Dec 12 18:43:00.417502 kernel: loop7: detected capacity change from 0 to 229808
Dec 12 18:43:00.437896 (sd-merge)[1243]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Dec 12 18:43:00.438886 (sd-merge)[1243]: Merged extensions into '/usr'.
Dec 12 18:43:00.447090 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:43:00.447194 systemd[1]: Reloading...
Dec 12 18:43:00.552525 zram_generator::config[1281]: No configuration found.
Dec 12 18:43:00.706429 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:43:00.782206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:43:00.782524 systemd[1]: Reloading finished in 334 ms.
Dec 12 18:43:00.817302 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:43:00.818460 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:43:00.819590 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:43:00.828766 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:43:00.830588 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:43:00.834650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:43:00.855054 systemd[1]: Reload requested from client PID 1313 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:43:00.855068 systemd[1]: Reloading...
Dec 12 18:43:00.861002 systemd-tmpfiles[1314]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:43:00.861317 systemd-tmpfiles[1314]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:43:00.861660 systemd-tmpfiles[1314]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:43:00.861929 systemd-tmpfiles[1314]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:43:00.863995 systemd-tmpfiles[1314]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:43:00.864264 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Dec 12 18:43:00.864350 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Dec 12 18:43:00.874735 systemd-tmpfiles[1314]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:43:00.874752 systemd-tmpfiles[1314]: Skipping /boot
Dec 12 18:43:00.889090 systemd-tmpfiles[1314]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:43:00.889108 systemd-tmpfiles[1314]: Skipping /boot
Dec 12 18:43:00.889884 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Dec 12 18:43:00.930513 zram_generator::config[1342]: No configuration found.
Dec 12 18:43:01.233520 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:43:01.248565 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:43:01.258513 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:43:01.316083 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:43:01.319201 systemd[1]: Reloading finished in 463 ms.
Dec 12 18:43:01.327612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:43:01.330377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:43:01.340180 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:43:01.345127 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:43:01.408238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:01.412716 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:43:01.416870 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:43:01.419744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:01.426537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:43:01.430757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:43:01.435617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:43:01.436708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:01.436827 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:43:01.439719 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:43:01.447798 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:43:01.456540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:43:01.466089 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:43:01.467553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:01.475960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:01.476153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:01.476339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:01.476433 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:43:01.476557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:01.482693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:43:01.483030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:43:01.487143 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:01.487427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:01.498529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:43:01.499624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:01.499736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:43:01.508818 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:43:01.509825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:01.518807 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:43:01.530288 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:43:01.557722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:43:01.568385 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:43:01.564816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:43:01.576609 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:43:01.579683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:43:01.581231 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:43:01.581455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:43:01.587422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:43:01.588892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:43:01.595253 augenrules[1477]: No rules
Dec 12 18:43:01.597090 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:43:01.597860 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:43:01.610990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:43:01.618523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:43:01.619530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:43:01.619614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:43:01.640998 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:43:01.645747 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:43:01.681051 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:43:01.683654 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:43:01.685662 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:43:01.695844 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:43:01.722642 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:43:01.841908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:43:01.895815 systemd-networkd[1450]: lo: Link UP
Dec 12 18:43:01.895825 systemd-networkd[1450]: lo: Gained carrier
Dec 12 18:43:01.898030 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:43:01.898540 systemd-networkd[1450]: Enumeration completed
Dec 12 18:43:01.899028 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:43:01.899097 systemd-networkd[1450]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:43:01.899225 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:43:01.900059 systemd-networkd[1450]: eth0: Link UP
Dec 12 18:43:01.900355 systemd-networkd[1450]: eth0: Gained carrier
Dec 12 18:43:01.900425 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:43:01.901715 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:43:01.907652 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:43:01.908541 systemd-resolved[1451]: Positive Trust Anchors:
Dec 12 18:43:01.908550 systemd-resolved[1451]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:43:01.908579 systemd-resolved[1451]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:43:01.910449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:43:01.916117 systemd-resolved[1451]: Defaulting to hostname 'linux'.
Dec 12 18:43:01.917907 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:43:01.918764 systemd[1]: Reached target network.target - Network.
Dec 12 18:43:01.919449 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:43:01.920263 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:43:01.945427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:43:01.946332 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:43:01.947133 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:43:01.948082 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:43:01.948926 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:43:01.949697 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:43:01.950545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:43:01.950591 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:43:01.951274 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:43:01.954569 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:43:01.957104 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:43:01.960060 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:43:01.961126 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:43:01.961963 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:43:01.965072 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:43:01.966638 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:43:01.968578 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:43:01.969568 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:43:01.972274 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:43:01.973337 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:43:01.974262 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:43:01.974354 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:43:01.975521 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:43:01.978594 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:43:01.981599 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:43:01.986167 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:43:01.988984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:43:01.993595 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:43:01.995575 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:43:01.998333 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:43:02.001969 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:43:02.006550 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:43:02.010676 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:43:02.017876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:43:02.024829 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:43:02.027028 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:43:02.027450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:43:02.030186 jq[1512]: false
Dec 12 18:43:02.036690 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:43:02.040634 oslogin_cache_refresh[1514]: Refreshing passwd entry cache
Dec 12 18:43:02.044803 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache
Dec 12 18:43:02.044803 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting
Dec 12 18:43:02.044803 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:43:02.044803 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache
Dec 12 18:43:02.044463 oslogin_cache_refresh[1514]: Failure getting users, quitting
Dec 12 18:43:02.045296 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting
Dec 12 18:43:02.045296 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:43:02.044507 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:43:02.044551 oslogin_cache_refresh[1514]: Refreshing group entry cache
Dec 12 18:43:02.045000 oslogin_cache_refresh[1514]: Failure getting groups, quitting
Dec 12 18:43:02.045009 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:43:02.045722 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:43:02.056149 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:43:02.057903 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:43:02.058643 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:43:02.059052 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:43:02.067966 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:43:02.076204 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:43:02.077826 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:43:02.091541 update_engine[1522]: I20251212 18:43:02.090351 1522 main.cc:92] Flatcar Update Engine starting
Dec 12 18:43:02.093100 jq[1523]: true
Dec 12 18:43:02.098320 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:43:02.100593 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:43:02.111199 extend-filesystems[1513]: Found /dev/sda6
Dec 12 18:43:02.116056 coreos-metadata[1509]: Dec 12 18:43:02.114 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:43:02.119065 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:43:02.123120 extend-filesystems[1513]: Found /dev/sda9
Dec 12 18:43:02.138522 extend-filesystems[1513]: Checking size of /dev/sda9
Dec 12 18:43:02.144185 jq[1543]: true
Dec 12 18:43:02.150351 tar[1534]: linux-amd64/LICENSE
Dec 12 18:43:02.153652 tar[1534]: linux-amd64/helm
Dec 12 18:43:02.161923 dbus-daemon[1510]: [system] SELinux support is enabled
Dec 12 18:43:02.163513 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:43:02.167649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:43:02.167678 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:43:02.169109 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:43:02.169129 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:43:02.182562 extend-filesystems[1513]: Resized partition /dev/sda9
Dec 12 18:43:02.192403 extend-filesystems[1560]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:43:02.198422 update_engine[1522]: I20251212 18:43:02.191748 1522 update_check_scheduler.cc:74] Next update check in 3m25s
Dec 12 18:43:02.190820 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:43:02.207895 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Dec 12 18:43:02.208552 systemd-logind[1521]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:43:02.208590 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:43:02.210654 systemd-logind[1521]: New seat seat0.
Dec 12 18:43:02.214752 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:43:02.215732 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:43:02.328606 bash[1576]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:43:02.332037 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:43:02.338450 systemd[1]: Starting sshkeys.service...
Dec 12 18:43:02.396133 containerd[1544]: time="2025-12-12T18:43:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:43:02.397008 containerd[1544]: time="2025-12-12T18:43:02.396986074Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:43:02.406495 containerd[1544]: time="2025-12-12T18:43:02.406453004Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.07µs"
Dec 12 18:43:02.406570 containerd[1544]: time="2025-12-12T18:43:02.406554404Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:43:02.406622 containerd[1544]: time="2025-12-12T18:43:02.406610464Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:43:02.406811 containerd[1544]: time="2025-12-12T18:43:02.406794434Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:43:02.406862 containerd[1544]: time="2025-12-12T18:43:02.406850954Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:43:02.406958 containerd[1544]: time="2025-12-12T18:43:02.406944524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407070 containerd[1544]: time="2025-12-12T18:43:02.407053374Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407116 containerd[1544]: time="2025-12-12T18:43:02.407105864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407381 containerd[1544]: time="2025-12-12T18:43:02.407362244Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407430 containerd[1544]: time="2025-12-12T18:43:02.407419025Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407501 containerd[1544]: time="2025-12-12T18:43:02.407462015Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407943 containerd[1544]: time="2025-12-12T18:43:02.407548515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407943 containerd[1544]: time="2025-12-12T18:43:02.407649495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407943 containerd[1544]: time="2025-12-12T18:43:02.407878625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407943 containerd[1544]: time="2025-12-12T18:43:02.407909515Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:43:02.407943 containerd[1544]: time="2025-12-12T18:43:02.407919285Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:43:02.408085 containerd[1544]: time="2025-12-12T18:43:02.408065135Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:43:02.408417 containerd[1544]: time="2025-12-12T18:43:02.408401146Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:43:02.408542 containerd[1544]: time="2025-12-12T18:43:02.408526616Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:43:02.451749 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455412363Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455632923Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455656773Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455682663Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455703383Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455720733Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455735803Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455750393Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455766303Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455780713Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455793973Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455810663Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.455979473Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:43:02.457551 containerd[1544]: time="2025-12-12T18:43:02.456006363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456026723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456043383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456059823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456074533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456093673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456109993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456126923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456142463Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456157923Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456223173Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456242273Z" level=info msg="Start snapshots syncer"
Dec 12 18:43:02.459282 containerd[1544]: time="2025-12-12T18:43:02.456274823Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 18:43:02.457950 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:43:02.470641 containerd[1544]: time="2025-12-12T18:43:02.469394176Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 18:43:02.470641 containerd[1544]: time="2025-12-12T18:43:02.470250767Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 18:43:02.477425 containerd[1544]: time="2025-12-12T18:43:02.477249764Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 18:43:02.477853 containerd[1544]: time="2025-12-12T18:43:02.477835445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 18:43:02.478042 containerd[1544]: time="2025-12-12T18:43:02.478023275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 18:43:02.478164 containerd[1544]: time="2025-12-12T18:43:02.478147915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 18:43:02.478249 containerd[1544]: time="2025-12-12T18:43:02.478224495Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 18:43:02.478349 containerd[1544]: time="2025-12-12T18:43:02.478333525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480343967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480367577Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480422658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480436038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480448168Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480517858Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480536538Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480544728Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480553298Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480699238Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480712638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480729048Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 12 18:43:02.480828 containerd[1544]: time="2025-12-12T18:43:02.480746428Z" level=info msg="runtime interface created"
Dec 12 18:43:02.482939 containerd[1544]: time="2025-12-12T18:43:02.481125908Z" level=info msg="created NRI interface"
Dec 12 18:43:02.482939 containerd[1544]: time="2025-12-12T18:43:02.481149998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 18:43:02.482939 containerd[1544]: time="2025-12-12T18:43:02.481164998Z" level=info msg="Connect containerd service"
Dec 12 18:43:02.482939 containerd[1544]:
time="2025-12-12T18:43:02.481183068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:43:02.492253 containerd[1544]: time="2025-12-12T18:43:02.490621198Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:43:02.506092 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:43:02.539955 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:43:02.546246 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:43:02.581212 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:43:02.583181 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:43:02.588150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:43:02.601392 coreos-metadata[1587]: Dec 12 18:43:02.601 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 12 18:43:02.606298 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:43:02.618564 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:43:02.626707 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:43:02.632115 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:43:02.634943 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 12 18:43:02.651030 containerd[1544]: time="2025-12-12T18:43:02.650888578Z" level=info msg="Start subscribing containerd event"
Dec 12 18:43:02.651419 containerd[1544]: time="2025-12-12T18:43:02.651381908Z" level=info msg="Start recovering state"
Dec 12 18:43:02.652052 containerd[1544]: time="2025-12-12T18:43:02.652036389Z" level=info msg="Start event monitor"
Dec 12 18:43:02.652197 containerd[1544]: time="2025-12-12T18:43:02.652182779Z" level=info msg="Start cni network conf syncer for default"
Dec 12 18:43:02.652610 containerd[1544]: time="2025-12-12T18:43:02.652469240Z" level=info msg="Start streaming server"
Dec 12 18:43:02.652785 containerd[1544]: time="2025-12-12T18:43:02.652765300Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 18:43:02.653776 containerd[1544]: time="2025-12-12T18:43:02.653675741Z" level=info msg="runtime interface starting up..."
Dec 12 18:43:02.653776 containerd[1544]: time="2025-12-12T18:43:02.653690511Z" level=info msg="starting plugins..."
Dec 12 18:43:02.653776 containerd[1544]: time="2025-12-12T18:43:02.653709681Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 18:43:02.654050 containerd[1544]: time="2025-12-12T18:43:02.652575800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 18:43:02.654973 containerd[1544]: time="2025-12-12T18:43:02.654570202Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 18:43:02.656623 containerd[1544]: time="2025-12-12T18:43:02.655629033Z" level=info msg="containerd successfully booted in 0.259914s"
Dec 12 18:43:02.655666 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 18:43:02.674669 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Dec 12 18:43:02.686497 extend-filesystems[1560]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 12 18:43:02.686497 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 10
Dec 12 18:43:02.686497 extend-filesystems[1560]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Dec 12 18:43:02.693042 extend-filesystems[1513]: Resized filesystem in /dev/sda9
Dec 12 18:43:02.688019 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:43:02.689238 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:43:02.818323 tar[1534]: linux-amd64/README.md
Dec 12 18:43:02.831785 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 18:43:03.102643 systemd-networkd[1450]: eth0: Gained IPv6LL
Dec 12 18:43:03.103251 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:03.124027 coreos-metadata[1509]: Dec 12 18:43:03.123 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:43:03.610922 coreos-metadata[1587]: Dec 12 18:43:03.610 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:43:04.107571 systemd-networkd[1450]: eth0: DHCPv4 address 172.238.172.51/24, gateway 172.238.172.1 acquired from 23.213.15.82
Dec 12 18:43:04.107695 dbus-daemon[1510]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1450 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 12 18:43:04.108615 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:04.109853 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:04.110721 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:04.112699 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 12 18:43:04.115092 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 18:43:04.118553 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 18:43:04.128815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:43:04.131111 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 18:43:04.172376 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 18:43:04.200162 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 12 18:43:04.201761 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 12 18:43:04.203110 dbus-daemon[1510]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1631 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 12 18:43:04.209704 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 12 18:43:04.287872 polkitd[1644]: Started polkitd version 126
Dec 12 18:43:04.292721 polkitd[1644]: Loading rules from directory /etc/polkit-1/rules.d
Dec 12 18:43:04.293009 polkitd[1644]: Loading rules from directory /run/polkit-1/rules.d
Dec 12 18:43:04.293096 polkitd[1644]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:43:04.293304 polkitd[1644]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 12 18:43:04.293332 polkitd[1644]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:43:04.293367 polkitd[1644]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 12 18:43:04.294437 polkitd[1644]: Finished loading, compiling and executing 2 rules
Dec 12 18:43:04.294803 systemd[1]: Started polkit.service - Authorization Manager.
Dec 12 18:43:04.296020 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 12 18:43:04.296930 polkitd[1644]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 12 18:43:04.309423 systemd-hostnamed[1631]: Hostname set to <172-238-172-51> (transient)
Dec 12 18:43:04.310240 systemd-resolved[1451]: System hostname changed to '172-238-172-51'.
Dec 12 18:43:05.063059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:43:05.080880 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:43:05.135437 coreos-metadata[1509]: Dec 12 18:43:05.135 INFO Putting http://169.254.169.254/v1/token: Attempt #3
Dec 12 18:43:05.208821 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 18:43:05.213746 systemd[1]: Started sshd@0-172.238.172.51:22-139.178.68.195:53370.service - OpenSSH per-connection server daemon (139.178.68.195:53370).
Dec 12 18:43:05.230633 coreos-metadata[1509]: Dec 12 18:43:05.230 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Dec 12 18:43:05.416191 coreos-metadata[1509]: Dec 12 18:43:05.416 INFO Fetch successful
Dec 12 18:43:05.416446 coreos-metadata[1509]: Dec 12 18:43:05.416 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Dec 12 18:43:05.563145 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 53370 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:05.565446 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:05.585191 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 18:43:05.585691 systemd-logind[1521]: New session 1 of user core.
Dec 12 18:43:05.589849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 18:43:05.614546 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 18:43:05.620161 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 12 18:43:05.620952 coreos-metadata[1587]: Dec 12 18:43:05.620 INFO Putting http://169.254.169.254/v1/token: Attempt #3
Dec 12 18:43:05.634450 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 18:43:05.638660 systemd-logind[1521]: New session c1 of user core.
Dec 12 18:43:05.642323 kubelet[1658]: E1212 18:43:05.642115 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:43:05.645192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:43:05.645385 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:43:05.645940 systemd[1]: kubelet.service: Consumed 907ms CPU time, 266.7M memory peak.
Dec 12 18:43:05.690535 coreos-metadata[1509]: Dec 12 18:43:05.690 INFO Fetch successful
Dec 12 18:43:05.718560 coreos-metadata[1587]: Dec 12 18:43:05.718 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Dec 12 18:43:05.782289 systemd[1673]: Queued start job for default target default.target.
Dec 12 18:43:05.784249 systemd[1673]: Created slice app.slice - User Application Slice.
Dec 12 18:43:05.784651 systemd[1673]: Reached target paths.target - Paths.
Dec 12 18:43:05.784752 systemd[1673]: Reached target timers.target - Timers.
Dec 12 18:43:05.789563 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 18:43:05.804652 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 12 18:43:05.806657 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:43:05.811977 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 18:43:05.812088 systemd[1673]: Reached target sockets.target - Sockets.
Dec 12 18:43:05.812133 systemd[1673]: Reached target basic.target - Basic System.
Dec 12 18:43:05.812178 systemd[1673]: Reached target default.target - Main User Target.
Dec 12 18:43:05.812210 systemd[1673]: Startup finished in 164ms.
Dec 12 18:43:05.812246 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 18:43:05.820693 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 18:43:05.852073 coreos-metadata[1587]: Dec 12 18:43:05.852 INFO Fetch successful
Dec 12 18:43:05.870907 update-ssh-keys[1703]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:43:05.871444 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 12 18:43:05.873837 systemd[1]: Finished sshkeys.service.
Dec 12 18:43:05.876706 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 18:43:05.878061 systemd[1]: Startup finished in 2.925s (kernel) + 8.219s (initrd) + 7.068s (userspace) = 18.212s.
Dec 12 18:43:06.077862 systemd[1]: Started sshd@1-172.238.172.51:22-139.178.68.195:53374.service - OpenSSH per-connection server daemon (139.178.68.195:53374).
Dec 12 18:43:06.175762 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:06.435683 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 53374 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:06.436368 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:06.442402 systemd-logind[1521]: New session 2 of user core.
Dec 12 18:43:06.447597 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 18:43:06.683019 sshd[1716]: Connection closed by 139.178.68.195 port 53374
Dec 12 18:43:06.683596 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:06.687825 systemd-logind[1521]: Session 2 logged out. Waiting for processes to exit.
Dec 12 18:43:06.688238 systemd[1]: sshd@1-172.238.172.51:22-139.178.68.195:53374.service: Deactivated successfully.
Dec 12 18:43:06.690180 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 18:43:06.691992 systemd-logind[1521]: Removed session 2.
Dec 12 18:43:06.745810 systemd[1]: Started sshd@2-172.238.172.51:22-139.178.68.195:53380.service - OpenSSH per-connection server daemon (139.178.68.195:53380).
Dec 12 18:43:07.094192 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 53380 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:07.095621 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:07.101135 systemd-logind[1521]: New session 3 of user core.
Dec 12 18:43:07.110630 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 18:43:07.346127 sshd[1725]: Connection closed by 139.178.68.195 port 53380
Dec 12 18:43:07.347099 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:07.351344 systemd[1]: sshd@2-172.238.172.51:22-139.178.68.195:53380.service: Deactivated successfully.
Dec 12 18:43:07.353313 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 18:43:07.354713 systemd-logind[1521]: Session 3 logged out. Waiting for processes to exit.
Dec 12 18:43:07.356296 systemd-logind[1521]: Removed session 3.
Dec 12 18:43:07.408956 systemd[1]: Started sshd@3-172.238.172.51:22-139.178.68.195:53396.service - OpenSSH per-connection server daemon (139.178.68.195:53396).
Dec 12 18:43:07.776327 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 53396 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:07.778172 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:07.782831 systemd-logind[1521]: New session 4 of user core.
Dec 12 18:43:07.789619 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 18:43:08.031219 sshd[1734]: Connection closed by 139.178.68.195 port 53396
Dec 12 18:43:08.031975 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:08.036092 systemd[1]: sshd@3-172.238.172.51:22-139.178.68.195:53396.service: Deactivated successfully.
Dec 12 18:43:08.037856 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 18:43:08.039209 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit.
Dec 12 18:43:08.045120 systemd-logind[1521]: Removed session 4.
Dec 12 18:43:08.092660 systemd[1]: Started sshd@4-172.238.172.51:22-139.178.68.195:53406.service - OpenSSH per-connection server daemon (139.178.68.195:53406).
Dec 12 18:43:08.439055 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 53406 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:08.441261 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:08.448353 systemd-logind[1521]: New session 5 of user core.
Dec 12 18:43:08.453645 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 18:43:08.644417 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 18:43:08.644896 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:08.662410 sudo[1744]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:08.713067 sshd[1743]: Connection closed by 139.178.68.195 port 53406
Dec 12 18:43:08.714162 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:08.718758 systemd[1]: sshd@4-172.238.172.51:22-139.178.68.195:53406.service: Deactivated successfully.
Dec 12 18:43:08.720941 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 18:43:08.723714 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit.
Dec 12 18:43:08.725011 systemd-logind[1521]: Removed session 5.
Dec 12 18:43:08.775302 systemd[1]: Started sshd@5-172.238.172.51:22-139.178.68.195:53408.service - OpenSSH per-connection server daemon (139.178.68.195:53408).
Dec 12 18:43:09.138393 sshd[1750]: Accepted publickey for core from 139.178.68.195 port 53408 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:09.140974 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:09.149271 systemd-logind[1521]: New session 6 of user core.
Dec 12 18:43:09.160620 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:43:09.342669 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:43:09.343025 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:09.347922 sudo[1755]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:09.353732 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:43:09.354044 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:09.363854 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:43:09.403585 augenrules[1777]: No rules
Dec 12 18:43:09.405943 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:43:09.406290 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:43:09.408004 sudo[1754]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:09.460569 sshd[1753]: Connection closed by 139.178.68.195 port 53408
Dec 12 18:43:09.461277 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:09.465864 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:43:09.466314 systemd[1]: sshd@5-172.238.172.51:22-139.178.68.195:53408.service: Deactivated successfully.
Dec 12 18:43:09.468570 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:43:09.474467 systemd-logind[1521]: Removed session 6.
Dec 12 18:43:09.536016 systemd[1]: Started sshd@6-172.238.172.51:22-139.178.68.195:53424.service - OpenSSH per-connection server daemon (139.178.68.195:53424).
Dec 12 18:43:09.900886 sshd[1786]: Accepted publickey for core from 139.178.68.195 port 53424 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:43:09.902938 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:09.908435 systemd-logind[1521]: New session 7 of user core.
Dec 12 18:43:09.914610 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:43:10.105971 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:43:10.106391 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:10.400819 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 18:43:10.415855 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 18:43:10.619112 dockerd[1809]: time="2025-12-12T18:43:10.618467324Z" level=info msg="Starting up"
Dec 12 18:43:10.619782 dockerd[1809]: time="2025-12-12T18:43:10.619764865Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 18:43:10.632157 dockerd[1809]: time="2025-12-12T18:43:10.632120707Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 18:43:10.674096 dockerd[1809]: time="2025-12-12T18:43:10.673936959Z" level=info msg="Loading containers: start."
Dec 12 18:43:10.686514 kernel: Initializing XFRM netlink socket
Dec 12 18:43:10.898234 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:10.899225 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:10.911264 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:10.949435 systemd-networkd[1450]: docker0: Link UP
Dec 12 18:43:10.950428 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection.
Dec 12 18:43:10.952861 dockerd[1809]: time="2025-12-12T18:43:10.952802808Z" level=info msg="Loading containers: done."
Dec 12 18:43:10.969018 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2287125176-merged.mount: Deactivated successfully.
Dec 12 18:43:10.969270 dockerd[1809]: time="2025-12-12T18:43:10.969204264Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:43:10.969326 dockerd[1809]: time="2025-12-12T18:43:10.969269274Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:43:10.969360 dockerd[1809]: time="2025-12-12T18:43:10.969349875Z" level=info msg="Initializing buildkit"
Dec 12 18:43:10.994869 dockerd[1809]: time="2025-12-12T18:43:10.994824490Z" level=info msg="Completed buildkit initialization"
Dec 12 18:43:10.999017 dockerd[1809]: time="2025-12-12T18:43:10.998988194Z" level=info msg="Daemon has completed initialization"
Dec 12 18:43:10.999177 dockerd[1809]: time="2025-12-12T18:43:10.999131004Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:43:10.999298 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:43:11.579786 containerd[1544]: time="2025-12-12T18:43:11.579733955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 12 18:43:12.288307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713459386.mount: Deactivated successfully.
Dec 12 18:43:13.359932 containerd[1544]: time="2025-12-12T18:43:13.358842763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:13.359932 containerd[1544]: time="2025-12-12T18:43:13.359771184Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Dec 12 18:43:13.359932 containerd[1544]: time="2025-12-12T18:43:13.359869635Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:13.363282 containerd[1544]: time="2025-12-12T18:43:13.362987768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:13.364258 containerd[1544]: time="2025-12-12T18:43:13.363735718Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.783950753s"
Dec 12 18:43:13.364258 containerd[1544]: time="2025-12-12T18:43:13.363773378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Dec 12 18:43:13.371046 containerd[1544]: time="2025-12-12T18:43:13.371025016Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 12 18:43:14.608641 containerd[1544]: time="2025-12-12T18:43:14.608572583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:14.609664 containerd[1544]: time="2025-12-12T18:43:14.609600764Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Dec 12 18:43:14.610204 containerd[1544]: time="2025-12-12T18:43:14.610169635Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:14.612518 containerd[1544]: time="2025-12-12T18:43:14.612382927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:14.613683 containerd[1544]: time="2025-12-12T18:43:14.613630028Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.242516692s"
Dec 12 18:43:14.613683 containerd[1544]: time="2025-12-12T18:43:14.613677358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Dec 12 18:43:14.614316 containerd[1544]: time="2025-12-12T18:43:14.614279609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 12 18:43:15.700818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:43:15.702717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:43:15.804363 containerd[1544]: time="2025-12-12T18:43:15.804304098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:15.805780 containerd[1544]: time="2025-12-12T18:43:15.805755380Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Dec 12 18:43:15.807984 containerd[1544]: time="2025-12-12T18:43:15.807944172Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:15.810495 containerd[1544]: time="2025-12-12T18:43:15.810413134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:15.812237 containerd[1544]: time="2025-12-12T18:43:15.812094396Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.197782757s"
Dec 12 18:43:15.812237 containerd[1544]: time="2025-12-12T18:43:15.812126986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Dec 12 18:43:15.812761 containerd[1544]: time="2025-12-12T18:43:15.812673187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 12 18:43:15.909714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:43:15.915937 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:43:15.959961 kubelet[2091]: E1212 18:43:15.959831 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:43:15.965798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:43:15.966001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:43:15.966462 systemd[1]: kubelet.service: Consumed 208ms CPU time, 108.9M memory peak. Dec 12 18:43:16.842196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141024380.mount: Deactivated successfully. Dec 12 18:43:17.265886 containerd[1544]: time="2025-12-12T18:43:17.265837310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:17.266952 containerd[1544]: time="2025-12-12T18:43:17.266605390Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 12 18:43:17.267696 containerd[1544]: time="2025-12-12T18:43:17.267652051Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:17.271259 containerd[1544]: time="2025-12-12T18:43:17.271225215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:17.271983 containerd[1544]: time="2025-12-12T18:43:17.271956776Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.459118669s" Dec 12 18:43:17.272082 containerd[1544]: time="2025-12-12T18:43:17.272058486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:43:17.273195 containerd[1544]: time="2025-12-12T18:43:17.273174347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:43:17.872761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549855312.mount: Deactivated successfully. Dec 12 18:43:18.589270 containerd[1544]: time="2025-12-12T18:43:18.589213953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:18.590236 containerd[1544]: time="2025-12-12T18:43:18.590208684Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 12 18:43:18.591180 containerd[1544]: time="2025-12-12T18:43:18.590700244Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:18.593232 containerd[1544]: time="2025-12-12T18:43:18.592742356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:18.594093 containerd[1544]: time="2025-12-12T18:43:18.594057398Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.32080809s" Dec 12 18:43:18.594093 containerd[1544]: time="2025-12-12T18:43:18.594089298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 18:43:18.595006 containerd[1544]: time="2025-12-12T18:43:18.594842028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:43:19.159999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258630330.mount: Deactivated successfully. Dec 12 18:43:19.165046 containerd[1544]: time="2025-12-12T18:43:19.164995718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:19.165935 containerd[1544]: time="2025-12-12T18:43:19.165905849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:43:19.167225 containerd[1544]: time="2025-12-12T18:43:19.166157029Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:19.168039 containerd[1544]: time="2025-12-12T18:43:19.167999711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:19.168869 containerd[1544]: time="2025-12-12T18:43:19.168840522Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.964254ms" Dec 12 18:43:19.168963 containerd[1544]: time="2025-12-12T18:43:19.168946802Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:43:19.169823 containerd[1544]: time="2025-12-12T18:43:19.169782523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 18:43:19.787639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899697110.mount: Deactivated successfully. Dec 12 18:43:21.073416 containerd[1544]: time="2025-12-12T18:43:21.072506605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:21.073416 containerd[1544]: time="2025-12-12T18:43:21.073164376Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 12 18:43:21.073416 containerd[1544]: time="2025-12-12T18:43:21.073356166Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:21.075516 containerd[1544]: time="2025-12-12T18:43:21.075415388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:21.076503 containerd[1544]: time="2025-12-12T18:43:21.076270329Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.906448136s" Dec 12 18:43:21.076503 containerd[1544]: time="2025-12-12T18:43:21.076298169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 18:43:23.437574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:23.437729 systemd[1]: kubelet.service: Consumed 208ms CPU time, 108.9M memory peak. Dec 12 18:43:23.442671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:23.474609 systemd[1]: Reload requested from client PID 2246 ('systemctl') (unit session-7.scope)... Dec 12 18:43:23.474625 systemd[1]: Reloading... Dec 12 18:43:23.647513 zram_generator::config[2305]: No configuration found. Dec 12 18:43:23.849194 systemd[1]: Reloading finished in 374 ms. Dec 12 18:43:23.914822 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:43:23.914928 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:43:23.915283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:23.915345 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.3M memory peak. Dec 12 18:43:23.917233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:24.101654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:24.113114 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:43:24.157229 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:43:24.159544 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:43:24.159544 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:43:24.159544 kubelet[2345]: I1212 18:43:24.158610 2345 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:43:24.703391 kubelet[2345]: I1212 18:43:24.703330 2345 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:43:24.703391 kubelet[2345]: I1212 18:43:24.703368 2345 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:43:24.703653 kubelet[2345]: I1212 18:43:24.703623 2345 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:43:24.737177 kubelet[2345]: E1212 18:43:24.737117 2345 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.172.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.172.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:43:24.743556 kubelet[2345]: I1212 18:43:24.743265 2345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:43:24.751351 kubelet[2345]: I1212 18:43:24.751318 2345 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:43:24.757618 kubelet[2345]: I1212 18:43:24.757586 2345 
server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:43:24.757913 kubelet[2345]: I1212 18:43:24.757867 2345 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:43:24.758108 kubelet[2345]: I1212 18:43:24.757899 2345 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-172-51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:43:24.758108 kubelet[2345]: I1212 18:43:24.758102 2345 
topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:43:24.758108 kubelet[2345]: I1212 18:43:24.758113 2345 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:43:24.758284 kubelet[2345]: I1212 18:43:24.758260 2345 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:24.761575 kubelet[2345]: I1212 18:43:24.761320 2345 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:43:24.761575 kubelet[2345]: I1212 18:43:24.761349 2345 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:43:24.761575 kubelet[2345]: I1212 18:43:24.761377 2345 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:43:24.761575 kubelet[2345]: I1212 18:43:24.761394 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:43:24.767066 kubelet[2345]: E1212 18:43:24.766779 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.172.51:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-172-51&limit=500&resourceVersion=0\": dial tcp 172.238.172.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:43:24.769306 kubelet[2345]: E1212 18:43:24.768880 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.172.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.172.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:43:24.769525 kubelet[2345]: I1212 18:43:24.769505 2345 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:43:24.770072 kubelet[2345]: I1212 18:43:24.770056 2345 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode 
or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:43:24.771764 kubelet[2345]: W1212 18:43:24.771716 2345 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:43:24.776293 kubelet[2345]: I1212 18:43:24.776262 2345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:43:24.776347 kubelet[2345]: I1212 18:43:24.776325 2345 server.go:1289] "Started kubelet" Dec 12 18:43:24.778517 kubelet[2345]: I1212 18:43:24.777151 2345 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:43:24.778517 kubelet[2345]: I1212 18:43:24.778114 2345 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:43:24.780513 kubelet[2345]: I1212 18:43:24.780434 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:43:24.782157 kubelet[2345]: I1212 18:43:24.781542 2345 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:43:24.783213 kubelet[2345]: E1212 18:43:24.781778 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.172.51:6443/api/v1/namespaces/default/events\": dial tcp 172.238.172.51:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-172-51.18808c04798d24e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-172-51,UID:172-238-172-51,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-172-51,},FirstTimestamp:2025-12-12 18:43:24.776285408 +0000 UTC m=+0.658091459,LastTimestamp:2025-12-12 18:43:24.776285408 +0000 UTC m=+0.658091459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-172-51,}" Dec 12 
18:43:24.786164 kubelet[2345]: I1212 18:43:24.786134 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:43:24.787141 kubelet[2345]: E1212 18:43:24.787122 2345 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:43:24.787679 kubelet[2345]: I1212 18:43:24.787663 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:43:24.791156 kubelet[2345]: E1212 18:43:24.791137 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-172-51\" not found" Dec 12 18:43:24.792145 kubelet[2345]: I1212 18:43:24.792128 2345 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:43:24.792397 kubelet[2345]: I1212 18:43:24.792383 2345 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:43:24.792533 kubelet[2345]: I1212 18:43:24.792522 2345 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:43:24.793142 kubelet[2345]: E1212 18:43:24.793121 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.172.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.172.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:43:24.793486 kubelet[2345]: I1212 18:43:24.793456 2345 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:43:24.793640 kubelet[2345]: I1212 18:43:24.793624 2345 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:43:24.797511 kubelet[2345]: E1212 18:43:24.797463 2345 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://172.238.172.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-172-51?timeout=10s\": dial tcp 172.238.172.51:6443: connect: connection refused" interval="200ms" Dec 12 18:43:24.797762 kubelet[2345]: I1212 18:43:24.797748 2345 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:43:24.818704 kubelet[2345]: I1212 18:43:24.818125 2345 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:43:24.818704 kubelet[2345]: I1212 18:43:24.818145 2345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:43:24.818704 kubelet[2345]: I1212 18:43:24.818161 2345 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:24.821979 kubelet[2345]: I1212 18:43:24.821623 2345 policy_none.go:49] "None policy: Start" Dec 12 18:43:24.821979 kubelet[2345]: I1212 18:43:24.821658 2345 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:43:24.821979 kubelet[2345]: I1212 18:43:24.821674 2345 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:43:24.823501 kubelet[2345]: I1212 18:43:24.823418 2345 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:43:24.828700 kubelet[2345]: I1212 18:43:24.827315 2345 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:43:24.828700 kubelet[2345]: I1212 18:43:24.827373 2345 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:43:24.828700 kubelet[2345]: I1212 18:43:24.827399 2345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:43:24.828700 kubelet[2345]: I1212 18:43:24.827410 2345 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:43:24.828700 kubelet[2345]: E1212 18:43:24.827465 2345 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:43:24.829927 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:43:24.835653 kubelet[2345]: E1212 18:43:24.835117 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.172.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.172.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:43:24.853116 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:43:24.857498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:43:24.868524 kubelet[2345]: E1212 18:43:24.868425 2345 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:43:24.869132 kubelet[2345]: I1212 18:43:24.869100 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:43:24.870423 kubelet[2345]: I1212 18:43:24.870208 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:43:24.870807 kubelet[2345]: E1212 18:43:24.870779 2345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:43:24.870861 kubelet[2345]: I1212 18:43:24.870809 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:43:24.871204 kubelet[2345]: E1212 18:43:24.870961 2345 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-172-51\" not found" Dec 12 18:43:24.949637 systemd[1]: Created slice kubepods-burstable-podd8543af667a9d4acb8a76aa0bf9653d5.slice - libcontainer container kubepods-burstable-podd8543af667a9d4acb8a76aa0bf9653d5.slice. Dec 12 18:43:24.963867 kubelet[2345]: E1212 18:43:24.963764 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:24.967829 systemd[1]: Created slice kubepods-burstable-podc67acedc5572db7d4219d0122d34d8ea.slice - libcontainer container kubepods-burstable-podc67acedc5572db7d4219d0122d34d8ea.slice. Dec 12 18:43:24.971934 kubelet[2345]: E1212 18:43:24.971760 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:24.974521 kubelet[2345]: I1212 18:43:24.974373 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-238-172-51" Dec 12 18:43:24.975183 kubelet[2345]: E1212 18:43:24.974805 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.172.51:6443/api/v1/nodes\": dial tcp 172.238.172.51:6443: connect: connection refused" node="172-238-172-51" Dec 12 18:43:24.974964 systemd[1]: Created slice kubepods-burstable-pod594840c885076cc469966dd99b7a9b5a.slice - libcontainer container kubepods-burstable-pod594840c885076cc469966dd99b7a9b5a.slice. 
Dec 12 18:43:24.977512 kubelet[2345]: E1212 18:43:24.977443 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:24.998221 kubelet[2345]: E1212 18:43:24.998178 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.172.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-172-51?timeout=10s\": dial tcp 172.238.172.51:6443: connect: connection refused" interval="400ms" Dec 12 18:43:25.094650 kubelet[2345]: I1212 18:43:25.094587 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8543af667a9d4acb8a76aa0bf9653d5-ca-certs\") pod \"kube-apiserver-172-238-172-51\" (UID: \"d8543af667a9d4acb8a76aa0bf9653d5\") " pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:25.094650 kubelet[2345]: I1212 18:43:25.094633 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8543af667a9d4acb8a76aa0bf9653d5-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-172-51\" (UID: \"d8543af667a9d4acb8a76aa0bf9653d5\") " pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:25.094650 kubelet[2345]: I1212 18:43:25.094663 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-ca-certs\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:25.094882 kubelet[2345]: I1212 18:43:25.094680 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-kubeconfig\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:25.094882 kubelet[2345]: I1212 18:43:25.094698 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:25.094882 kubelet[2345]: I1212 18:43:25.094726 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/594840c885076cc469966dd99b7a9b5a-kubeconfig\") pod \"kube-scheduler-172-238-172-51\" (UID: \"594840c885076cc469966dd99b7a9b5a\") " pod="kube-system/kube-scheduler-172-238-172-51" Dec 12 18:43:25.094882 kubelet[2345]: I1212 18:43:25.094750 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8543af667a9d4acb8a76aa0bf9653d5-k8s-certs\") pod \"kube-apiserver-172-238-172-51\" (UID: \"d8543af667a9d4acb8a76aa0bf9653d5\") " pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:25.094882 kubelet[2345]: I1212 18:43:25.094767 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-flexvolume-dir\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:25.095006 kubelet[2345]: I1212 18:43:25.094784 2345 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-k8s-certs\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:25.176959 kubelet[2345]: I1212 18:43:25.176920 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-238-172-51" Dec 12 18:43:25.177573 kubelet[2345]: E1212 18:43:25.177256 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.172.51:6443/api/v1/nodes\": dial tcp 172.238.172.51:6443: connect: connection refused" node="172-238-172-51" Dec 12 18:43:25.264677 kubelet[2345]: E1212 18:43:25.264529 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.265424 containerd[1544]: time="2025-12-12T18:43:25.265264997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-172-51,Uid:d8543af667a9d4acb8a76aa0bf9653d5,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:25.273614 kubelet[2345]: E1212 18:43:25.273559 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.274094 containerd[1544]: time="2025-12-12T18:43:25.274043356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-172-51,Uid:c67acedc5572db7d4219d0122d34d8ea,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:25.278800 kubelet[2345]: E1212 18:43:25.278760 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 
18:43:25.294967 containerd[1544]: time="2025-12-12T18:43:25.294646157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-172-51,Uid:594840c885076cc469966dd99b7a9b5a,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:25.296810 containerd[1544]: time="2025-12-12T18:43:25.296776389Z" level=info msg="connecting to shim 069254bdff4f0048e7c15e6b262c1c2f92d90da14d85fd33c2334537383825f6" address="unix:///run/containerd/s/dbb1e005df6abacacba857b83e921470d8deb4c68829e6839214d75f26737199" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:25.322503 containerd[1544]: time="2025-12-12T18:43:25.322427924Z" level=info msg="connecting to shim 28517e6869bc31f60580bf846f79856e25cae0ae498655c0dfc03157290fe05b" address="unix:///run/containerd/s/54425ada884239d7993c453b4d16e40f3768073fed6c17ae8b2223f0df2fe40b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:25.337815 systemd[1]: Started cri-containerd-069254bdff4f0048e7c15e6b262c1c2f92d90da14d85fd33c2334537383825f6.scope - libcontainer container 069254bdff4f0048e7c15e6b262c1c2f92d90da14d85fd33c2334537383825f6. Dec 12 18:43:25.344538 containerd[1544]: time="2025-12-12T18:43:25.344497496Z" level=info msg="connecting to shim 7799950f41db718d8d2ee60ec5aa522d09dee7e045620203d5fcc3b16cfe2901" address="unix:///run/containerd/s/9348183238f2aa7fecce8a8e744713b2420a9502ec248a76b5c05bd574ecb234" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:25.368747 systemd[1]: Started cri-containerd-28517e6869bc31f60580bf846f79856e25cae0ae498655c0dfc03157290fe05b.scope - libcontainer container 28517e6869bc31f60580bf846f79856e25cae0ae498655c0dfc03157290fe05b. Dec 12 18:43:25.375798 systemd[1]: Started cri-containerd-7799950f41db718d8d2ee60ec5aa522d09dee7e045620203d5fcc3b16cfe2901.scope - libcontainer container 7799950f41db718d8d2ee60ec5aa522d09dee7e045620203d5fcc3b16cfe2901. 
Dec 12 18:43:25.399641 kubelet[2345]: E1212 18:43:25.399554 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.172.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-172-51?timeout=10s\": dial tcp 172.238.172.51:6443: connect: connection refused" interval="800ms" Dec 12 18:43:25.427258 containerd[1544]: time="2025-12-12T18:43:25.427151139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-172-51,Uid:d8543af667a9d4acb8a76aa0bf9653d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"069254bdff4f0048e7c15e6b262c1c2f92d90da14d85fd33c2334537383825f6\"" Dec 12 18:43:25.429799 kubelet[2345]: E1212 18:43:25.429780 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.433407 containerd[1544]: time="2025-12-12T18:43:25.433359925Z" level=info msg="CreateContainer within sandbox \"069254bdff4f0048e7c15e6b262c1c2f92d90da14d85fd33c2334537383825f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:43:25.445360 containerd[1544]: time="2025-12-12T18:43:25.445335577Z" level=info msg="Container 4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:25.453629 containerd[1544]: time="2025-12-12T18:43:25.453607176Z" level=info msg="CreateContainer within sandbox \"069254bdff4f0048e7c15e6b262c1c2f92d90da14d85fd33c2334537383825f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524\"" Dec 12 18:43:25.454340 containerd[1544]: time="2025-12-12T18:43:25.454322536Z" level=info msg="StartContainer for \"4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524\"" Dec 12 18:43:25.455428 containerd[1544]: time="2025-12-12T18:43:25.455378607Z" 
level=info msg="connecting to shim 4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524" address="unix:///run/containerd/s/dbb1e005df6abacacba857b83e921470d8deb4c68829e6839214d75f26737199" protocol=ttrpc version=3 Dec 12 18:43:25.476818 systemd[1]: Started cri-containerd-4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524.scope - libcontainer container 4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524. Dec 12 18:43:25.477838 containerd[1544]: time="2025-12-12T18:43:25.477807210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-172-51,Uid:c67acedc5572db7d4219d0122d34d8ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"28517e6869bc31f60580bf846f79856e25cae0ae498655c0dfc03157290fe05b\"" Dec 12 18:43:25.480023 kubelet[2345]: E1212 18:43:25.479996 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.484955 containerd[1544]: time="2025-12-12T18:43:25.484875037Z" level=info msg="CreateContainer within sandbox \"28517e6869bc31f60580bf846f79856e25cae0ae498655c0dfc03157290fe05b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:43:25.494060 containerd[1544]: time="2025-12-12T18:43:25.494029816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-172-51,Uid:594840c885076cc469966dd99b7a9b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7799950f41db718d8d2ee60ec5aa522d09dee7e045620203d5fcc3b16cfe2901\"" Dec 12 18:43:25.495531 kubelet[2345]: E1212 18:43:25.495508 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.497767 containerd[1544]: time="2025-12-12T18:43:25.497604299Z" level=info msg="Container 
39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:25.499835 containerd[1544]: time="2025-12-12T18:43:25.499758902Z" level=info msg="CreateContainer within sandbox \"7799950f41db718d8d2ee60ec5aa522d09dee7e045620203d5fcc3b16cfe2901\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:43:25.506028 containerd[1544]: time="2025-12-12T18:43:25.505982248Z" level=info msg="CreateContainer within sandbox \"28517e6869bc31f60580bf846f79856e25cae0ae498655c0dfc03157290fe05b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b\"" Dec 12 18:43:25.506605 containerd[1544]: time="2025-12-12T18:43:25.506582118Z" level=info msg="StartContainer for \"39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b\"" Dec 12 18:43:25.508499 containerd[1544]: time="2025-12-12T18:43:25.508264580Z" level=info msg="connecting to shim 39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b" address="unix:///run/containerd/s/54425ada884239d7993c453b4d16e40f3768073fed6c17ae8b2223f0df2fe40b" protocol=ttrpc version=3 Dec 12 18:43:25.509561 containerd[1544]: time="2025-12-12T18:43:25.509538761Z" level=info msg="Container 19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:25.517550 containerd[1544]: time="2025-12-12T18:43:25.516914929Z" level=info msg="CreateContainer within sandbox \"7799950f41db718d8d2ee60ec5aa522d09dee7e045620203d5fcc3b16cfe2901\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01\"" Dec 12 18:43:25.519314 containerd[1544]: time="2025-12-12T18:43:25.519289291Z" level=info msg="StartContainer for \"19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01\"" Dec 12 18:43:25.520210 containerd[1544]: 
time="2025-12-12T18:43:25.520176482Z" level=info msg="connecting to shim 19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01" address="unix:///run/containerd/s/9348183238f2aa7fecce8a8e744713b2420a9502ec248a76b5c05bd574ecb234" protocol=ttrpc version=3 Dec 12 18:43:25.543388 systemd[1]: Started cri-containerd-39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b.scope - libcontainer container 39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b. Dec 12 18:43:25.553772 systemd[1]: Started cri-containerd-19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01.scope - libcontainer container 19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01. Dec 12 18:43:25.583893 containerd[1544]: time="2025-12-12T18:43:25.583805576Z" level=info msg="StartContainer for \"4fabfdd66ef38fed09a86b8c58bebc74bb3ec20dd415ca2c5cf2e395bed07524\" returns successfully" Dec 12 18:43:25.584750 kubelet[2345]: I1212 18:43:25.584727 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-238-172-51" Dec 12 18:43:25.588503 kubelet[2345]: E1212 18:43:25.588298 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.172.51:6443/api/v1/nodes\": dial tcp 172.238.172.51:6443: connect: connection refused" node="172-238-172-51" Dec 12 18:43:25.627721 kubelet[2345]: E1212 18:43:25.627373 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.172.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.172.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:43:25.716534 containerd[1544]: time="2025-12-12T18:43:25.715260977Z" level=info msg="StartContainer for \"19b1a206d0c41155b4cce27e5f1af98698e405dde1acf797e5fc2909d290ed01\" returns successfully" Dec 12 18:43:25.734955 containerd[1544]: 
time="2025-12-12T18:43:25.734915867Z" level=info msg="StartContainer for \"39abddb27789c23e7dc26a2b6d65bb373539bdfafc0bc09b63f356b2801fd97b\" returns successfully" Dec 12 18:43:25.845769 kubelet[2345]: E1212 18:43:25.845662 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:25.847161 kubelet[2345]: E1212 18:43:25.847139 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.849029 kubelet[2345]: E1212 18:43:25.849008 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:25.849275 kubelet[2345]: E1212 18:43:25.849257 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:25.851650 kubelet[2345]: E1212 18:43:25.851631 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:25.851736 kubelet[2345]: E1212 18:43:25.851718 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:26.391166 kubelet[2345]: I1212 18:43:26.391103 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-238-172-51" Dec 12 18:43:26.855646 kubelet[2345]: E1212 18:43:26.855612 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:26.855768 
kubelet[2345]: E1212 18:43:26.855741 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:26.856257 kubelet[2345]: E1212 18:43:26.856234 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:26.856353 kubelet[2345]: E1212 18:43:26.856335 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:27.001795 kubelet[2345]: E1212 18:43:27.001760 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:27.001932 kubelet[2345]: E1212 18:43:27.001913 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:27.219688 kubelet[2345]: E1212 18:43:27.219643 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-172-51\" not found" node="172-238-172-51" Dec 12 18:43:27.362624 kubelet[2345]: I1212 18:43:27.362575 2345 kubelet_node_status.go:78] "Successfully registered node" node="172-238-172-51" Dec 12 18:43:27.362624 kubelet[2345]: E1212 18:43:27.362620 2345 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-238-172-51\": node \"172-238-172-51\" not found" Dec 12 18:43:27.396879 kubelet[2345]: I1212 18:43:27.396790 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:27.404738 kubelet[2345]: E1212 18:43:27.404447 2345 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-172-51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:27.404738 kubelet[2345]: I1212 18:43:27.404500 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:27.406380 kubelet[2345]: E1212 18:43:27.406331 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-172-51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:27.406438 kubelet[2345]: I1212 18:43:27.406383 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-172-51" Dec 12 18:43:27.409861 kubelet[2345]: E1212 18:43:27.409674 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-172-51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-172-51" Dec 12 18:43:27.770965 kubelet[2345]: I1212 18:43:27.770920 2345 apiserver.go:52] "Watching apiserver" Dec 12 18:43:27.793058 kubelet[2345]: I1212 18:43:27.793021 2345 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:43:27.853685 kubelet[2345]: I1212 18:43:27.853619 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:27.853685 kubelet[2345]: I1212 18:43:27.853638 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-172-51" Dec 12 18:43:27.856003 kubelet[2345]: E1212 18:43:27.855986 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-172-51\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:27.856162 kubelet[2345]: E1212 18:43:27.856119 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:27.856202 kubelet[2345]: E1212 18:43:27.856191 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-172-51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-172-51" Dec 12 18:43:27.856272 kubelet[2345]: E1212 18:43:27.856260 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:28.855796 kubelet[2345]: I1212 18:43:28.855722 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:28.864593 kubelet[2345]: E1212 18:43:28.864550 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:29.567503 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-7.scope)... Dec 12 18:43:29.567881 systemd[1]: Reloading... Dec 12 18:43:29.692555 zram_generator::config[2675]: No configuration found. Dec 12 18:43:29.858602 kubelet[2345]: E1212 18:43:29.857864 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:43:29.918067 systemd[1]: Reloading finished in 349 ms. Dec 12 18:43:29.950703 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:29.956046 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 12 18:43:29.956794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:29.956853 systemd[1]: kubelet.service: Consumed 1.096s CPU time, 131.7M memory peak. Dec 12 18:43:29.959720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:30.164569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:30.174984 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:43:30.221020 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:43:30.221020 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:43:30.221020 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:43:30.221597 kubelet[2723]: I1212 18:43:30.221111 2723 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:43:30.230669 kubelet[2723]: I1212 18:43:30.230618 2723 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:43:30.230669 kubelet[2723]: I1212 18:43:30.230643 2723 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:43:30.230819 kubelet[2723]: I1212 18:43:30.230795 2723 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:43:30.231832 kubelet[2723]: I1212 18:43:30.231802 2723 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:43:30.234592 kubelet[2723]: I1212 18:43:30.233945 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:43:30.239314 kubelet[2723]: I1212 18:43:30.239290 2723 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:43:30.246103 kubelet[2723]: I1212 18:43:30.244678 2723 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:43:30.246103 kubelet[2723]: I1212 18:43:30.244904 2723 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:43:30.246103 kubelet[2723]: I1212 18:43:30.244928 2723 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-172-51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:43:30.246103 kubelet[2723]: I1212 18:43:30.245160 2723 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
18:43:30.246349 kubelet[2723]: I1212 18:43:30.245171 2723 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:43:30.246349 kubelet[2723]: I1212 18:43:30.245212 2723 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:30.246349 kubelet[2723]: I1212 18:43:30.245385 2723 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:43:30.246349 kubelet[2723]: I1212 18:43:30.245397 2723 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:43:30.246349 kubelet[2723]: I1212 18:43:30.245417 2723 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:43:30.246349 kubelet[2723]: I1212 18:43:30.245432 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:43:30.248108 kubelet[2723]: I1212 18:43:30.248085 2723 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:43:30.248719 kubelet[2723]: I1212 18:43:30.248685 2723 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:43:30.252470 kubelet[2723]: I1212 18:43:30.252411 2723 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:43:30.252629 kubelet[2723]: I1212 18:43:30.252613 2723 server.go:1289] "Started kubelet" Dec 12 18:43:30.255286 kubelet[2723]: I1212 18:43:30.255247 2723 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:43:30.255992 kubelet[2723]: I1212 18:43:30.255694 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:43:30.256120 kubelet[2723]: I1212 18:43:30.256084 2723 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:43:30.256828 kubelet[2723]: I1212 18:43:30.256456 2723 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 
18:43:30.266496 kubelet[2723]: I1212 18:43:30.265871 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:43:30.273440 kubelet[2723]: E1212 18:43:30.273416 2723 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:43:30.274151 kubelet[2723]: I1212 18:43:30.274137 2723 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:43:30.274766 kubelet[2723]: I1212 18:43:30.274749 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:43:30.277156 kubelet[2723]: I1212 18:43:30.277140 2723 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:43:30.277326 kubelet[2723]: I1212 18:43:30.277316 2723 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:43:30.278652 kubelet[2723]: I1212 18:43:30.278636 2723 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:43:30.278825 kubelet[2723]: I1212 18:43:30.278802 2723 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:43:30.282401 kubelet[2723]: I1212 18:43:30.282360 2723 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:43:30.282645 kubelet[2723]: I1212 18:43:30.282591 2723 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:43:30.283959 kubelet[2723]: I1212 18:43:30.283930 2723 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:43:30.283959 kubelet[2723]: I1212 18:43:30.283954 2723 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:43:30.284042 kubelet[2723]: I1212 18:43:30.283976 2723 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:43:30.284042 kubelet[2723]: I1212 18:43:30.283985 2723 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:43:30.284042 kubelet[2723]: E1212 18:43:30.284033 2723 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:43:30.344406 kubelet[2723]: I1212 18:43:30.344363 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:43:30.344615 kubelet[2723]: I1212 18:43:30.344601 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:43:30.344676 kubelet[2723]: I1212 18:43:30.344668 2723 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:30.344859 kubelet[2723]: I1212 18:43:30.344838 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:43:30.344939 kubelet[2723]: I1212 18:43:30.344917 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:43:30.344987 kubelet[2723]: I1212 18:43:30.344978 2723 policy_none.go:49] "None policy: Start" Dec 12 18:43:30.345029 kubelet[2723]: I1212 18:43:30.345022 2723 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:43:30.345075 kubelet[2723]: I1212 18:43:30.345067 2723 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:43:30.345211 kubelet[2723]: I1212 18:43:30.345200 2723 state_mem.go:75] "Updated machine memory state" Dec 12 18:43:30.350737 kubelet[2723]: E1212 18:43:30.350700 2723 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:43:30.350959 kubelet[2723]: I1212 
18:43:30.350931 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:43:30.351008 kubelet[2723]: I1212 18:43:30.350955 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:43:30.351216 kubelet[2723]: I1212 18:43:30.351195 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:43:30.353635 kubelet[2723]: E1212 18:43:30.353323 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:43:30.385605 kubelet[2723]: I1212 18:43:30.385572 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-172-51" Dec 12 18:43:30.385899 kubelet[2723]: I1212 18:43:30.385860 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:30.386183 kubelet[2723]: I1212 18:43:30.386151 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-172-51" Dec 12 18:43:30.393244 kubelet[2723]: E1212 18:43:30.393205 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-172-51\" already exists" pod="kube-system/kube-apiserver-172-238-172-51" Dec 12 18:43:30.457409 kubelet[2723]: I1212 18:43:30.457381 2723 kubelet_node_status.go:75] "Attempting to register node" node="172-238-172-51" Dec 12 18:43:30.465458 kubelet[2723]: I1212 18:43:30.465408 2723 kubelet_node_status.go:124] "Node was previously registered" node="172-238-172-51" Dec 12 18:43:30.465597 kubelet[2723]: I1212 18:43:30.465518 2723 kubelet_node_status.go:78] "Successfully registered node" node="172-238-172-51" Dec 12 18:43:30.478055 kubelet[2723]: I1212 18:43:30.478024 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-ca-certs\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:30.478055 kubelet[2723]: I1212 18:43:30.478057 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-flexvolume-dir\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:30.478055 kubelet[2723]: I1212 18:43:30.478079 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-kubeconfig\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:30.478320 kubelet[2723]: I1212 18:43:30.478098 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:30.478320 kubelet[2723]: I1212 18:43:30.478120 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/594840c885076cc469966dd99b7a9b5a-kubeconfig\") pod \"kube-scheduler-172-238-172-51\" (UID: \"594840c885076cc469966dd99b7a9b5a\") " pod="kube-system/kube-scheduler-172-238-172-51"
Dec 12 18:43:30.478320 kubelet[2723]: I1212 18:43:30.478136 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8543af667a9d4acb8a76aa0bf9653d5-k8s-certs\") pod \"kube-apiserver-172-238-172-51\" (UID: \"d8543af667a9d4acb8a76aa0bf9653d5\") " pod="kube-system/kube-apiserver-172-238-172-51"
Dec 12 18:43:30.478320 kubelet[2723]: I1212 18:43:30.478150 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c67acedc5572db7d4219d0122d34d8ea-k8s-certs\") pod \"kube-controller-manager-172-238-172-51\" (UID: \"c67acedc5572db7d4219d0122d34d8ea\") " pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:30.478320 kubelet[2723]: I1212 18:43:30.478169 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8543af667a9d4acb8a76aa0bf9653d5-ca-certs\") pod \"kube-apiserver-172-238-172-51\" (UID: \"d8543af667a9d4acb8a76aa0bf9653d5\") " pod="kube-system/kube-apiserver-172-238-172-51"
Dec 12 18:43:30.478436 kubelet[2723]: I1212 18:43:30.478185 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8543af667a9d4acb8a76aa0bf9653d5-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-172-51\" (UID: \"d8543af667a9d4acb8a76aa0bf9653d5\") " pod="kube-system/kube-apiserver-172-238-172-51"
Dec 12 18:43:30.562768 sudo[2759]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 12 18:43:30.563127 sudo[2759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 12 18:43:30.695687 kubelet[2723]: E1212 18:43:30.693659 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:30.695687 kubelet[2723]: E1212 18:43:30.693676 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:30.695687 kubelet[2723]: E1212 18:43:30.693900 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:30.890276 sudo[2759]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:31.247170 kubelet[2723]: I1212 18:43:31.247129 2723 apiserver.go:52] "Watching apiserver"
Dec 12 18:43:31.278502 kubelet[2723]: I1212 18:43:31.277997 2723 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 18:43:31.322136 kubelet[2723]: I1212 18:43:31.322102 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-172-51"
Dec 12 18:43:31.323331 kubelet[2723]: I1212 18:43:31.323307 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:31.323610 kubelet[2723]: I1212 18:43:31.323588 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-172-51"
Dec 12 18:43:31.333291 kubelet[2723]: E1212 18:43:31.333263 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-172-51\" already exists" pod="kube-system/kube-scheduler-172-238-172-51"
Dec 12 18:43:31.333437 kubelet[2723]: E1212 18:43:31.333411 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:31.335492 kubelet[2723]: E1212 18:43:31.335192 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-172-51\" already exists" pod="kube-system/kube-apiserver-172-238-172-51"
Dec 12 18:43:31.335492 kubelet[2723]: E1212 18:43:31.335257 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-172-51\" already exists" pod="kube-system/kube-controller-manager-172-238-172-51"
Dec 12 18:43:31.335492 kubelet[2723]: E1212 18:43:31.335363 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:31.336127 kubelet[2723]: E1212 18:43:31.336087 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:31.366742 kubelet[2723]: I1212 18:43:31.366676 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-172-51" podStartSLOduration=1.366660467 podStartE2EDuration="1.366660467s" podCreationTimestamp="2025-12-12 18:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:31.358549339 +0000 UTC m=+1.177083928" watchObservedRunningTime="2025-12-12 18:43:31.366660467 +0000 UTC m=+1.185195066"
Dec 12 18:43:31.373497 kubelet[2723]: I1212 18:43:31.373297 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-172-51" podStartSLOduration=3.373287594 podStartE2EDuration="3.373287594s" podCreationTimestamp="2025-12-12 18:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:31.366999018 +0000 UTC m=+1.185533617" watchObservedRunningTime="2025-12-12 18:43:31.373287594 +0000 UTC m=+1.191822183"
Dec 12 18:43:32.328459 kubelet[2723]: E1212 18:43:32.328302 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:32.330310 kubelet[2723]: E1212 18:43:32.329340 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:32.330310 kubelet[2723]: E1212 18:43:32.329594 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:32.525766 sudo[1790]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:32.578717 sshd[1789]: Connection closed by 139.178.68.195 port 53424
Dec 12 18:43:32.579434 sshd-session[1786]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:32.584820 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:43:32.585864 systemd[1]: sshd@6-172.238.172.51:22-139.178.68.195:53424.service: Deactivated successfully.
Dec 12 18:43:32.588926 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:43:32.589193 systemd[1]: session-7.scope: Consumed 4.323s CPU time, 273.9M memory peak.
Dec 12 18:43:32.591839 systemd-logind[1521]: Removed session 7.
Dec 12 18:43:33.552429 kubelet[2723]: E1212 18:43:33.552382 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:34.334869 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 12 18:43:35.862336 kubelet[2723]: I1212 18:43:35.862303 2723 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 18:43:35.863235 containerd[1544]: time="2025-12-12T18:43:35.863147423Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 18:43:35.863554 kubelet[2723]: I1212 18:43:35.863275 2723 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 18:43:36.918793 kubelet[2723]: I1212 18:43:36.918527 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-172-51" podStartSLOduration=6.9184678779999995 podStartE2EDuration="6.918467878s" podCreationTimestamp="2025-12-12 18:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:31.373464934 +0000 UTC m=+1.191999523" watchObservedRunningTime="2025-12-12 18:43:36.918467878 +0000 UTC m=+6.737002477"
Dec 12 18:43:36.931772 systemd[1]: Created slice kubepods-burstable-podf2f04a4f_affe_4334_b273_55a29b910b13.slice - libcontainer container kubepods-burstable-podf2f04a4f_affe_4334_b273_55a29b910b13.slice.
Dec 12 18:43:36.941760 systemd[1]: Created slice kubepods-besteffort-poded0caf09_2a20_4eab_829a_034afb97a53e.slice - libcontainer container kubepods-besteffort-poded0caf09_2a20_4eab_829a_034afb97a53e.slice.
Dec 12 18:43:37.019781 kubelet[2723]: I1212 18:43:37.019731 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed0caf09-2a20-4eab-829a-034afb97a53e-xtables-lock\") pod \"kube-proxy-g7t55\" (UID: \"ed0caf09-2a20-4eab-829a-034afb97a53e\") " pod="kube-system/kube-proxy-g7t55"
Dec 12 18:43:37.019781 kubelet[2723]: I1212 18:43:37.019764 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjdz\" (UniqueName: \"kubernetes.io/projected/ed0caf09-2a20-4eab-829a-034afb97a53e-kube-api-access-ktjdz\") pod \"kube-proxy-g7t55\" (UID: \"ed0caf09-2a20-4eab-829a-034afb97a53e\") " pod="kube-system/kube-proxy-g7t55"
Dec 12 18:43:37.019781 kubelet[2723]: I1212 18:43:37.019786 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-xtables-lock\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020515 kubelet[2723]: I1212 18:43:37.019805 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-net\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020515 kubelet[2723]: I1212 18:43:37.019821 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-kernel\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020515 kubelet[2723]: I1212 18:43:37.019837 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-bpf-maps\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020515 kubelet[2723]: I1212 18:43:37.019853 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-hostproc\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020515 kubelet[2723]: I1212 18:43:37.019868 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-hubble-tls\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020515 kubelet[2723]: I1212 18:43:37.019897 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed0caf09-2a20-4eab-829a-034afb97a53e-lib-modules\") pod \"kube-proxy-g7t55\" (UID: \"ed0caf09-2a20-4eab-829a-034afb97a53e\") " pod="kube-system/kube-proxy-g7t55"
Dec 12 18:43:37.020699 kubelet[2723]: I1212 18:43:37.019922 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-cgroup\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020699 kubelet[2723]: I1212 18:43:37.019958 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-run\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020699 kubelet[2723]: I1212 18:43:37.020202 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-etc-cni-netd\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020699 kubelet[2723]: I1212 18:43:37.020224 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879kt\" (UniqueName: \"kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-kube-api-access-879kt\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020699 kubelet[2723]: I1212 18:43:37.020241 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed0caf09-2a20-4eab-829a-034afb97a53e-kube-proxy\") pod \"kube-proxy-g7t55\" (UID: \"ed0caf09-2a20-4eab-829a-034afb97a53e\") " pod="kube-system/kube-proxy-g7t55"
Dec 12 18:43:37.020699 kubelet[2723]: I1212 18:43:37.020256 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cni-path\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020843 kubelet[2723]: I1212 18:43:37.020273 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-lib-modules\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020843 kubelet[2723]: I1212 18:43:37.020290 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2f04a4f-affe-4334-b273-55a29b910b13-clustermesh-secrets\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.020843 kubelet[2723]: I1212 18:43:37.020306 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-config-path\") pod \"cilium-vtbts\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") " pod="kube-system/cilium-vtbts"
Dec 12 18:43:37.089924 systemd[1]: Created slice kubepods-besteffort-pod5f8e1773_c08a_4770_b8a3_6275e0a1781d.slice - libcontainer container kubepods-besteffort-pod5f8e1773_c08a_4770_b8a3_6275e0a1781d.slice.
Dec 12 18:43:37.121308 kubelet[2723]: I1212 18:43:37.121279 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f8e1773-c08a-4770-b8a3-6275e0a1781d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tl7kw\" (UID: \"5f8e1773-c08a-4770-b8a3-6275e0a1781d\") " pod="kube-system/cilium-operator-6c4d7847fc-tl7kw"
Dec 12 18:43:37.121616 kubelet[2723]: I1212 18:43:37.121593 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw28f\" (UniqueName: \"kubernetes.io/projected/5f8e1773-c08a-4770-b8a3-6275e0a1781d-kube-api-access-qw28f\") pod \"cilium-operator-6c4d7847fc-tl7kw\" (UID: \"5f8e1773-c08a-4770-b8a3-6275e0a1781d\") " pod="kube-system/cilium-operator-6c4d7847fc-tl7kw"
Dec 12 18:43:37.237940 kubelet[2723]: E1212 18:43:37.237641 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:37.239432 containerd[1544]: time="2025-12-12T18:43:37.239385419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vtbts,Uid:f2f04a4f-affe-4334-b273-55a29b910b13,Namespace:kube-system,Attempt:0,}"
Dec 12 18:43:37.252497 kubelet[2723]: E1212 18:43:37.251674 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:37.254262 containerd[1544]: time="2025-12-12T18:43:37.253979783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7t55,Uid:ed0caf09-2a20-4eab-829a-034afb97a53e,Namespace:kube-system,Attempt:0,}"
Dec 12 18:43:37.256225 containerd[1544]: time="2025-12-12T18:43:37.256154385Z" level=info msg="connecting to shim 2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6" address="unix:///run/containerd/s/f4f0ed1ab4afef9c1bdb677a6028bef489e741bf3043c648f171886f9c5f122a" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:37.276987 containerd[1544]: time="2025-12-12T18:43:37.276950896Z" level=info msg="connecting to shim 3dce51c9d5d62719048b24a071af6dba0a8a3fcf92781fd2c5391ea0e604c2dd" address="unix:///run/containerd/s/98815835d4224354147619454652f320f63beddd789ca295df5e42029d45274c" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:37.278629 systemd[1]: Started cri-containerd-2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6.scope - libcontainer container 2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6.
Dec 12 18:43:37.308637 systemd[1]: Started cri-containerd-3dce51c9d5d62719048b24a071af6dba0a8a3fcf92781fd2c5391ea0e604c2dd.scope - libcontainer container 3dce51c9d5d62719048b24a071af6dba0a8a3fcf92781fd2c5391ea0e604c2dd.
Dec 12 18:43:37.315982 containerd[1544]: time="2025-12-12T18:43:37.315915025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vtbts,Uid:f2f04a4f-affe-4334-b273-55a29b910b13,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\""
Dec 12 18:43:37.316804 kubelet[2723]: E1212 18:43:37.316785 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:37.318432 containerd[1544]: time="2025-12-12T18:43:37.318387838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 12 18:43:37.344342 containerd[1544]: time="2025-12-12T18:43:37.344293283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7t55,Uid:ed0caf09-2a20-4eab-829a-034afb97a53e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dce51c9d5d62719048b24a071af6dba0a8a3fcf92781fd2c5391ea0e604c2dd\""
Dec 12 18:43:37.344866 kubelet[2723]: E1212 18:43:37.344849 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:37.348526 containerd[1544]: time="2025-12-12T18:43:37.348501328Z" level=info msg="CreateContainer within sandbox \"3dce51c9d5d62719048b24a071af6dba0a8a3fcf92781fd2c5391ea0e604c2dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 18:43:37.357635 containerd[1544]: time="2025-12-12T18:43:37.357604167Z" level=info msg="Container 41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:37.363045 containerd[1544]: time="2025-12-12T18:43:37.362977762Z" level=info msg="CreateContainer within sandbox \"3dce51c9d5d62719048b24a071af6dba0a8a3fcf92781fd2c5391ea0e604c2dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488\""
Dec 12 18:43:37.363804 containerd[1544]: time="2025-12-12T18:43:37.363506813Z" level=info msg="StartContainer for \"41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488\""
Dec 12 18:43:37.365076 containerd[1544]: time="2025-12-12T18:43:37.365018514Z" level=info msg="connecting to shim 41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488" address="unix:///run/containerd/s/98815835d4224354147619454652f320f63beddd789ca295df5e42029d45274c" protocol=ttrpc version=3
Dec 12 18:43:37.389686 systemd[1]: Started cri-containerd-41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488.scope - libcontainer container 41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488.
Dec 12 18:43:37.394313 kubelet[2723]: E1212 18:43:37.394239 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:37.395669 containerd[1544]: time="2025-12-12T18:43:37.395630025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tl7kw,Uid:5f8e1773-c08a-4770-b8a3-6275e0a1781d,Namespace:kube-system,Attempt:0,}"
Dec 12 18:43:37.417305 containerd[1544]: time="2025-12-12T18:43:37.417250246Z" level=info msg="connecting to shim a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7" address="unix:///run/containerd/s/9045ec36349e2210b2f08b9fb055996635cd22aec19de506c1e4d40cb55c0743" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:37.443675 systemd[1]: Started cri-containerd-a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7.scope - libcontainer container a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7.
Dec 12 18:43:37.496924 containerd[1544]: time="2025-12-12T18:43:37.496244195Z" level=info msg="StartContainer for \"41dbfbfb3b005489d1a7c8a8964b9768eb23fe99f6ccc7e494238d81be987488\" returns successfully"
Dec 12 18:43:37.511187 kubelet[2723]: E1212 18:43:37.510596 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:37.532876 containerd[1544]: time="2025-12-12T18:43:37.532828152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tl7kw,Uid:5f8e1773-c08a-4770-b8a3-6275e0a1781d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\""
Dec 12 18:43:37.534601 kubelet[2723]: E1212 18:43:37.534567 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:38.345933 kubelet[2723]: E1212 18:43:38.345894 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:38.347097 kubelet[2723]: E1212 18:43:38.346994 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:38.362871 kubelet[2723]: I1212 18:43:38.362810 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7t55" podStartSLOduration=2.362794172 podStartE2EDuration="2.362794172s" podCreationTimestamp="2025-12-12 18:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:38.361589271 +0000 UTC m=+8.180123860" watchObservedRunningTime="2025-12-12 18:43:38.362794172 +0000 UTC m=+8.181328771"
Dec 12 18:43:39.349417 kubelet[2723]: E1212 18:43:39.348928 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:40.805039 kubelet[2723]: E1212 18:43:40.804977 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:41.462212 systemd-timesyncd[1467]: Contacted time server [2607:f710:35::29c:0:2]:123 (2.flatcar.pool.ntp.org).
Dec 12 18:43:41.462267 systemd-timesyncd[1467]: Initial clock synchronization to Fri 2025-12-12 18:43:41.417553 UTC.
Dec 12 18:43:41.511016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573455523.mount: Deactivated successfully.
Dec 12 18:43:42.956345 containerd[1544]: time="2025-12-12T18:43:42.955681141Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.956345 containerd[1544]: time="2025-12-12T18:43:42.956318019Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Dec 12 18:43:42.956924 containerd[1544]: time="2025-12-12T18:43:42.956904208Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.957978 containerd[1544]: time="2025-12-12T18:43:42.957929424Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.639504436s"
Dec 12 18:43:42.958051 containerd[1544]: time="2025-12-12T18:43:42.958034242Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 12 18:43:42.959923 containerd[1544]: time="2025-12-12T18:43:42.959901714Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 12 18:43:42.962768 containerd[1544]: time="2025-12-12T18:43:42.962745916Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 18:43:42.970311 containerd[1544]: time="2025-12-12T18:43:42.970277542Z" level=info msg="Container 3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:42.975710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301524805.mount: Deactivated successfully.
Dec 12 18:43:42.980993 containerd[1544]: time="2025-12-12T18:43:42.980966081Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\""
Dec 12 18:43:42.982654 containerd[1544]: time="2025-12-12T18:43:42.982595515Z" level=info msg="StartContainer for \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\""
Dec 12 18:43:42.983398 containerd[1544]: time="2025-12-12T18:43:42.983372861Z" level=info msg="connecting to shim 3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0" address="unix:///run/containerd/s/f4f0ed1ab4afef9c1bdb677a6028bef489e741bf3043c648f171886f9c5f122a" protocol=ttrpc version=3
Dec 12 18:43:43.016650 systemd[1]: Started cri-containerd-3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0.scope - libcontainer container 3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0.
Dec 12 18:43:43.050414 containerd[1544]: time="2025-12-12T18:43:43.050352762Z" level=info msg="StartContainer for \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\" returns successfully"
Dec 12 18:43:43.063631 systemd[1]: cri-containerd-3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0.scope: Deactivated successfully.
Dec 12 18:43:43.066453 containerd[1544]: time="2025-12-12T18:43:43.066425976Z" level=info msg="received container exit event container_id:\"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\" id:\"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\" pid:3147 exited_at:{seconds:1765565023 nanos:66084727}"
Dec 12 18:43:43.085589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0-rootfs.mount: Deactivated successfully.
Dec 12 18:43:43.357601 kubelet[2723]: E1212 18:43:43.357435 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:43.362758 containerd[1544]: time="2025-12-12T18:43:43.362730614Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 18:43:43.373034 containerd[1544]: time="2025-12-12T18:43:43.370700491Z" level=info msg="Container 403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:43.376000 containerd[1544]: time="2025-12-12T18:43:43.375977623Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\""
Dec 12 18:43:43.377967 containerd[1544]: time="2025-12-12T18:43:43.377946227Z" level=info msg="StartContainer for \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\""
Dec 12 18:43:43.380246 containerd[1544]: time="2025-12-12T18:43:43.380207098Z" level=info msg="connecting to shim 403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb" address="unix:///run/containerd/s/f4f0ed1ab4afef9c1bdb677a6028bef489e741bf3043c648f171886f9c5f122a" protocol=ttrpc version=3
Dec 12 18:43:43.397646 systemd[1]: Started cri-containerd-403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb.scope - libcontainer container 403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb.
Dec 12 18:43:43.436402 containerd[1544]: time="2025-12-12T18:43:43.436359424Z" level=info msg="StartContainer for \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\" returns successfully"
Dec 12 18:43:43.452993 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:43:43.453208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:43:43.453670 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:43:43.457220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:43:43.457533 systemd[1]: cri-containerd-403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb.scope: Deactivated successfully.
Dec 12 18:43:43.460983 containerd[1544]: time="2025-12-12T18:43:43.460792627Z" level=info msg="received container exit event container_id:\"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\" id:\"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\" pid:3193 exited_at:{seconds:1765565023 nanos:459897532}"
Dec 12 18:43:43.481304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:43:43.557847 kubelet[2723]: E1212 18:43:43.557775 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:44.288559 containerd[1544]: time="2025-12-12T18:43:44.288506322Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:44.289335 containerd[1544]: time="2025-12-12T18:43:44.289248537Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Dec 12 18:43:44.290003 containerd[1544]: time="2025-12-12T18:43:44.289853758Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:44.290933 containerd[1544]: time="2025-12-12T18:43:44.290904009Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.330894613s"
Dec 12 18:43:44.290970 containerd[1544]: time="2025-12-12T18:43:44.290933926Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 12 18:43:44.295496 containerd[1544]: time="2025-12-12T18:43:44.295353587Z" level=info msg="CreateContainer within sandbox \"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 12 18:43:44.305010 containerd[1544]: time="2025-12-12T18:43:44.304988575Z" level=info msg="Container 6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:44.308976 containerd[1544]: time="2025-12-12T18:43:44.308943663Z" level=info msg="CreateContainer within sandbox \"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\""
Dec 12 18:43:44.312010 containerd[1544]: time="2025-12-12T18:43:44.311981484Z" level=info msg="StartContainer for \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\""
Dec 12 18:43:44.313113 containerd[1544]: time="2025-12-12T18:43:44.313087180Z" level=info msg="connecting to shim 6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86" address="unix:///run/containerd/s/9045ec36349e2210b2f08b9fb055996635cd22aec19de506c1e4d40cb55c0743" protocol=ttrpc version=3
Dec 12 18:43:44.338618 systemd[1]: Started cri-containerd-6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86.scope - libcontainer container 6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86.
Dec 12 18:43:44.363513 kubelet[2723]: E1212 18:43:44.363456 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:44.374850 containerd[1544]: time="2025-12-12T18:43:44.374806225Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 18:43:44.378316 containerd[1544]: time="2025-12-12T18:43:44.378240656Z" level=info msg="StartContainer for \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" returns successfully"
Dec 12 18:43:44.396318 containerd[1544]: time="2025-12-12T18:43:44.396208694Z" level=info msg="Container a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:44.403416 containerd[1544]: time="2025-12-12T18:43:44.403357093Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\""
Dec 12 18:43:44.404443 containerd[1544]: time="2025-12-12T18:43:44.404426358Z" level=info msg="StartContainer for \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\""
Dec 12 18:43:44.405991 containerd[1544]: time="2025-12-12T18:43:44.405971460Z" level=info msg="connecting to shim a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c" address="unix:///run/containerd/s/f4f0ed1ab4afef9c1bdb677a6028bef489e741bf3043c648f171886f9c5f122a" protocol=ttrpc version=3
Dec 12 18:43:44.430637 systemd[1]: Started cri-containerd-a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c.scope - libcontainer container a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c.
Dec 12 18:43:44.508114 containerd[1544]: time="2025-12-12T18:43:44.508062444Z" level=info msg="StartContainer for \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\" returns successfully"
Dec 12 18:43:44.531368 systemd[1]: cri-containerd-a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c.scope: Deactivated successfully.
Dec 12 18:43:44.537664 containerd[1544]: time="2025-12-12T18:43:44.537591610Z" level=info msg="received container exit event container_id:\"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\" id:\"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\" pid:3290 exited_at:{seconds:1765565024 nanos:536934627}"
Dec 12 18:43:44.971847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318258367.mount: Deactivated successfully.
Dec 12 18:43:45.374408 kubelet[2723]: E1212 18:43:45.374299 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:45.379015 kubelet[2723]: E1212 18:43:45.378985 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:45.383912 kubelet[2723]: I1212 18:43:45.383417 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tl7kw" podStartSLOduration=1.627702397 podStartE2EDuration="8.383405764s" podCreationTimestamp="2025-12-12 18:43:37 +0000 UTC" firstStartedPulling="2025-12-12 18:43:37.536016965 +0000 UTC m=+7.354551554" lastFinishedPulling="2025-12-12 18:43:44.291720332 +0000 UTC m=+14.110254921" observedRunningTime="2025-12-12 18:43:45.383067489 +0000 UTC m=+15.201602088" watchObservedRunningTime="2025-12-12 18:43:45.383405764 +0000 UTC m=+15.201940353"
Dec 12 18:43:45.385676 containerd[1544]: time="2025-12-12T18:43:45.385567150Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 18:43:45.397310 containerd[1544]: time="2025-12-12T18:43:45.397263767Z" level=info msg="Container 11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:45.404656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447677784.mount: Deactivated successfully.
Dec 12 18:43:45.407022 containerd[1544]: time="2025-12-12T18:43:45.406995420Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\""
Dec 12 18:43:45.408379 containerd[1544]: time="2025-12-12T18:43:45.407615230Z" level=info msg="StartContainer for \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\""
Dec 12 18:43:45.408656 containerd[1544]: time="2025-12-12T18:43:45.408628437Z" level=info msg="connecting to shim 11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b" address="unix:///run/containerd/s/f4f0ed1ab4afef9c1bdb677a6028bef489e741bf3043c648f171886f9c5f122a" protocol=ttrpc version=3
Dec 12 18:43:45.436621 systemd[1]: Started cri-containerd-11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b.scope - libcontainer container 11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b.
Dec 12 18:43:45.466267 systemd[1]: cri-containerd-11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b.scope: Deactivated successfully.
Dec 12 18:43:45.468303 containerd[1544]: time="2025-12-12T18:43:45.468215677Z" level=info msg="received container exit event container_id:\"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\" id:\"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\" pid:3330 exited_at:{seconds:1765565025 nanos:467860532}"
Dec 12 18:43:45.470086 containerd[1544]: time="2025-12-12T18:43:45.470064799Z" level=info msg="StartContainer for \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\" returns successfully"
Dec 12 18:43:45.496256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b-rootfs.mount: Deactivated successfully.
Dec 12 18:43:46.386713 kubelet[2723]: E1212 18:43:46.385916 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:46.386713 kubelet[2723]: E1212 18:43:46.386270 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:46.390209 containerd[1544]: time="2025-12-12T18:43:46.390167680Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 18:43:46.403305 containerd[1544]: time="2025-12-12T18:43:46.402917745Z" level=info msg="Container a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:46.412181 containerd[1544]: time="2025-12-12T18:43:46.412142224Z" level=info msg="CreateContainer within sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\""
Dec 12 18:43:46.413263 containerd[1544]: time="2025-12-12T18:43:46.413233270Z" level=info msg="StartContainer for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\""
Dec 12 18:43:46.414286 containerd[1544]: time="2025-12-12T18:43:46.414255015Z" level=info msg="connecting to shim a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454" address="unix:///run/containerd/s/f4f0ed1ab4afef9c1bdb677a6028bef489e741bf3043c648f171886f9c5f122a" protocol=ttrpc version=3
Dec 12 18:43:46.436607 systemd[1]: Started cri-containerd-a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454.scope - libcontainer container a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454.
Dec 12 18:43:46.478517 containerd[1544]: time="2025-12-12T18:43:46.477823660Z" level=info msg="StartContainer for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" returns successfully"
Dec 12 18:43:46.654512 kubelet[2723]: I1212 18:43:46.652818 2723 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 12 18:43:46.745588 systemd[1]: Created slice kubepods-burstable-pod00c534bc_00c4_4067_b6bb_997891905496.slice - libcontainer container kubepods-burstable-pod00c534bc_00c4_4067_b6bb_997891905496.slice.
Dec 12 18:43:46.752670 kubelet[2723]: I1212 18:43:46.752626 2723 status_manager.go:895] "Failed to get status for pod" podUID="00c534bc-00c4-4067-b6bb-997891905496" pod="kube-system/coredns-674b8bbfcf-tjl6z" err="pods \"coredns-674b8bbfcf-tjl6z\" is forbidden: User \"system:node:172-238-172-51\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-172-51' and this object"
Dec 12 18:43:46.762334 systemd[1]: Created slice kubepods-burstable-pod18788f02_bef9_495c_a7a6_8c180131193e.slice - libcontainer container kubepods-burstable-pod18788f02_bef9_495c_a7a6_8c180131193e.slice.
Dec 12 18:43:46.793434 kubelet[2723]: I1212 18:43:46.793397 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18788f02-bef9-495c-a7a6-8c180131193e-config-volume\") pod \"coredns-674b8bbfcf-c7rn8\" (UID: \"18788f02-bef9-495c-a7a6-8c180131193e\") " pod="kube-system/coredns-674b8bbfcf-c7rn8"
Dec 12 18:43:46.793434 kubelet[2723]: I1212 18:43:46.793432 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x998x\" (UniqueName: \"kubernetes.io/projected/00c534bc-00c4-4067-b6bb-997891905496-kube-api-access-x998x\") pod \"coredns-674b8bbfcf-tjl6z\" (UID: \"00c534bc-00c4-4067-b6bb-997891905496\") " pod="kube-system/coredns-674b8bbfcf-tjl6z"
Dec 12 18:43:46.793684 kubelet[2723]: I1212 18:43:46.793450 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fts\" (UniqueName: \"kubernetes.io/projected/18788f02-bef9-495c-a7a6-8c180131193e-kube-api-access-z5fts\") pod \"coredns-674b8bbfcf-c7rn8\" (UID: \"18788f02-bef9-495c-a7a6-8c180131193e\") " pod="kube-system/coredns-674b8bbfcf-c7rn8"
Dec 12 18:43:46.793684 kubelet[2723]: I1212 18:43:46.793464 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00c534bc-00c4-4067-b6bb-997891905496-config-volume\") pod \"coredns-674b8bbfcf-tjl6z\" (UID: \"00c534bc-00c4-4067-b6bb-997891905496\") " pod="kube-system/coredns-674b8bbfcf-tjl6z"
Dec 12 18:43:47.050559 kubelet[2723]: E1212 18:43:47.049852 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:47.050675 containerd[1544]: time="2025-12-12T18:43:47.050365445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tjl6z,Uid:00c534bc-00c4-4067-b6bb-997891905496,Namespace:kube-system,Attempt:0,}"
Dec 12 18:43:47.067385 kubelet[2723]: E1212 18:43:47.067344 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:47.069269 containerd[1544]: time="2025-12-12T18:43:47.068950864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c7rn8,Uid:18788f02-bef9-495c-a7a6-8c180131193e,Namespace:kube-system,Attempt:0,}"
Dec 12 18:43:47.392740 kubelet[2723]: E1212 18:43:47.392637 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:47.411288 kubelet[2723]: I1212 18:43:47.411078 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vtbts" podStartSLOduration=5.769932049 podStartE2EDuration="11.411064268s" podCreationTimestamp="2025-12-12 18:43:36 +0000 UTC" firstStartedPulling="2025-12-12 18:43:37.318104037 +0000 UTC m=+7.136638636" lastFinishedPulling="2025-12-12 18:43:42.959236266 +0000 UTC m=+12.777770855" observedRunningTime="2025-12-12 18:43:47.410466862 +0000 UTC m=+17.229001450" watchObservedRunningTime="2025-12-12 18:43:47.411064268 +0000 UTC m=+17.229598857"
Dec 12 18:43:47.666735 update_engine[1522]: I20251212 18:43:47.666556 1522 update_attempter.cc:509] Updating boot flags...
Dec 12 18:43:48.393458 kubelet[2723]: E1212 18:43:48.393422 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:48.800677 systemd-networkd[1450]: cilium_host: Link UP
Dec 12 18:43:48.801698 systemd-networkd[1450]: cilium_net: Link UP
Dec 12 18:43:48.802891 systemd-networkd[1450]: cilium_net: Gained carrier
Dec 12 18:43:48.804310 systemd-networkd[1450]: cilium_host: Gained carrier
Dec 12 18:43:48.931824 systemd-networkd[1450]: cilium_vxlan: Link UP
Dec 12 18:43:48.931839 systemd-networkd[1450]: cilium_vxlan: Gained carrier
Dec 12 18:43:49.139534 kernel: NET: Registered PF_ALG protocol family
Dec 12 18:43:49.191569 systemd-networkd[1450]: cilium_net: Gained IPv6LL
Dec 12 18:43:49.395549 kubelet[2723]: E1212 18:43:49.395454 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:49.567222 systemd-networkd[1450]: cilium_host: Gained IPv6LL
Dec 12 18:43:49.793020 systemd-networkd[1450]: lxc_health: Link UP
Dec 12 18:43:49.793399 systemd-networkd[1450]: lxc_health: Gained carrier
Dec 12 18:43:50.088862 kernel: eth0: renamed from tmpb7d36
Dec 12 18:43:50.088182 systemd-networkd[1450]: lxc8af1c92acdb6: Link UP
Dec 12 18:43:50.090782 systemd-networkd[1450]: lxc8af1c92acdb6: Gained carrier
Dec 12 18:43:50.108734 systemd-networkd[1450]: lxc74d37485971f: Link UP
Dec 12 18:43:50.116811 kernel: eth0: renamed from tmpabbe1
Dec 12 18:43:50.120694 systemd-networkd[1450]: lxc74d37485971f: Gained carrier
Dec 12 18:43:50.718711 systemd-networkd[1450]: cilium_vxlan: Gained IPv6LL
Dec 12 18:43:50.866752 kubelet[2723]: E1212 18:43:50.863746 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:51.038679 systemd-networkd[1450]: lxc_health: Gained IPv6LL
Dec 12 18:43:51.398667 kubelet[2723]: E1212 18:43:51.398557 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:51.422679 systemd-networkd[1450]: lxc8af1c92acdb6: Gained IPv6LL
Dec 12 18:43:52.126858 systemd-networkd[1450]: lxc74d37485971f: Gained IPv6LL
Dec 12 18:43:52.401978 kubelet[2723]: E1212 18:43:52.400989 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:53.376190 containerd[1544]: time="2025-12-12T18:43:53.375799418Z" level=info msg="connecting to shim b7d3668874ed44b61c3789efc81368695250744baf4c4f6c7ee9d8b09ff786eb" address="unix:///run/containerd/s/e604cc25f5d57dd78a965f549e25f32ae147a3777d94912eeb4a290a7621e7bf" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:53.390087 containerd[1544]: time="2025-12-12T18:43:53.390048621Z" level=info msg="connecting to shim abbe1185f813c76958a7b7da51d3f30bf298887d479fa30924e673d8489342cb" address="unix:///run/containerd/s/80a35759a77293a3b9fdd1ea6b09a82dc8ad3eb77f547542b365c2dbc12c01ce" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:43:53.429631 systemd[1]: Started cri-containerd-abbe1185f813c76958a7b7da51d3f30bf298887d479fa30924e673d8489342cb.scope - libcontainer container abbe1185f813c76958a7b7da51d3f30bf298887d479fa30924e673d8489342cb.
Dec 12 18:43:53.436141 systemd[1]: Started cri-containerd-b7d3668874ed44b61c3789efc81368695250744baf4c4f6c7ee9d8b09ff786eb.scope - libcontainer container b7d3668874ed44b61c3789efc81368695250744baf4c4f6c7ee9d8b09ff786eb.
Dec 12 18:43:53.513927 containerd[1544]: time="2025-12-12T18:43:53.513882224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c7rn8,Uid:18788f02-bef9-495c-a7a6-8c180131193e,Namespace:kube-system,Attempt:0,} returns sandbox id \"abbe1185f813c76958a7b7da51d3f30bf298887d479fa30924e673d8489342cb\""
Dec 12 18:43:53.516178 kubelet[2723]: E1212 18:43:53.516130 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:53.521640 containerd[1544]: time="2025-12-12T18:43:53.520964378Z" level=info msg="CreateContainer within sandbox \"abbe1185f813c76958a7b7da51d3f30bf298887d479fa30924e673d8489342cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:43:53.536341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount244803530.mount: Deactivated successfully.
Dec 12 18:43:53.536883 containerd[1544]: time="2025-12-12T18:43:53.536860502Z" level=info msg="Container a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:53.550502 containerd[1544]: time="2025-12-12T18:43:53.550327054Z" level=info msg="CreateContainer within sandbox \"abbe1185f813c76958a7b7da51d3f30bf298887d479fa30924e673d8489342cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43\""
Dec 12 18:43:53.554640 containerd[1544]: time="2025-12-12T18:43:53.554620199Z" level=info msg="StartContainer for \"a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43\""
Dec 12 18:43:53.556295 containerd[1544]: time="2025-12-12T18:43:53.556255425Z" level=info msg="connecting to shim a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43" address="unix:///run/containerd/s/80a35759a77293a3b9fdd1ea6b09a82dc8ad3eb77f547542b365c2dbc12c01ce" protocol=ttrpc version=3
Dec 12 18:43:53.570261 containerd[1544]: time="2025-12-12T18:43:53.570199954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tjl6z,Uid:00c534bc-00c4-4067-b6bb-997891905496,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d3668874ed44b61c3789efc81368695250744baf4c4f6c7ee9d8b09ff786eb\""
Dec 12 18:43:53.571199 kubelet[2723]: E1212 18:43:53.571177 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:53.575673 containerd[1544]: time="2025-12-12T18:43:53.575642529Z" level=info msg="CreateContainer within sandbox \"b7d3668874ed44b61c3789efc81368695250744baf4c4f6c7ee9d8b09ff786eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:43:53.582676 containerd[1544]: time="2025-12-12T18:43:53.582642726Z" level=info msg="Container 8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:43:53.586547 containerd[1544]: time="2025-12-12T18:43:53.586514136Z" level=info msg="CreateContainer within sandbox \"b7d3668874ed44b61c3789efc81368695250744baf4c4f6c7ee9d8b09ff786eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36\""
Dec 12 18:43:53.588439 containerd[1544]: time="2025-12-12T18:43:53.587253795Z" level=info msg="StartContainer for \"8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36\""
Dec 12 18:43:53.588439 containerd[1544]: time="2025-12-12T18:43:53.588145797Z" level=info msg="connecting to shim 8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36" address="unix:///run/containerd/s/e604cc25f5d57dd78a965f549e25f32ae147a3777d94912eeb4a290a7621e7bf" protocol=ttrpc version=3
Dec 12 18:43:53.592631 systemd[1]: Started cri-containerd-a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43.scope - libcontainer container a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43.
Dec 12 18:43:53.618667 systemd[1]: Started cri-containerd-8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36.scope - libcontainer container 8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36.
Dec 12 18:43:53.654106 containerd[1544]: time="2025-12-12T18:43:53.652716842Z" level=info msg="StartContainer for \"a61411e79d50449971e8518957ad0bd4693d402486f687fbcfebc12824436f43\" returns successfully"
Dec 12 18:43:53.674951 containerd[1544]: time="2025-12-12T18:43:53.674918451Z" level=info msg="StartContainer for \"8b64d8f8dd876fbf180a8c410ef0d2c9473a3193da607914d3538645582d2a36\" returns successfully"
Dec 12 18:43:54.359932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108196389.mount: Deactivated successfully.
Dec 12 18:43:54.410257 kubelet[2723]: E1212 18:43:54.410212 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:54.414301 kubelet[2723]: E1212 18:43:54.414217 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:54.428173 kubelet[2723]: I1212 18:43:54.428130 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c7rn8" podStartSLOduration=17.428117601 podStartE2EDuration="17.428117601s" podCreationTimestamp="2025-12-12 18:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:54.427595619 +0000 UTC m=+24.246130218" watchObservedRunningTime="2025-12-12 18:43:54.428117601 +0000 UTC m=+24.246652190"
Dec 12 18:43:54.457365 kubelet[2723]: I1212 18:43:54.457309 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tjl6z" podStartSLOduration=17.457294956 podStartE2EDuration="17.457294956s" podCreationTimestamp="2025-12-12 18:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:54.456247178 +0000 UTC m=+24.274781777" watchObservedRunningTime="2025-12-12 18:43:54.457294956 +0000 UTC m=+24.275829565"
Dec 12 18:43:55.416248 kubelet[2723]: E1212 18:43:55.416119 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:55.416248 kubelet[2723]: E1212 18:43:55.416119 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:56.418754 kubelet[2723]: E1212 18:43:56.417559 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:43:56.418754 kubelet[2723]: E1212 18:43:56.417814 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:44:51.285013 kubelet[2723]: E1212 18:44:51.284893 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:44:53.284750 kubelet[2723]: E1212 18:44:53.284715 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:44:56.290494 kubelet[2723]: E1212 18:44:56.290128 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:00.285528 kubelet[2723]: E1212 18:45:00.285102 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:03.284764 kubelet[2723]: E1212 18:45:03.284679 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:05.285660 kubelet[2723]: E1212 18:45:05.285629 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:09.284858 kubelet[2723]: E1212 18:45:09.284727 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:09.284858 kubelet[2723]: E1212 18:45:09.284733 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:25.117814 systemd[1]: Started sshd@7-172.238.172.51:22-139.178.68.195:46598.service - OpenSSH per-connection server daemon (139.178.68.195:46598).
Dec 12 18:45:25.464559 sshd[4057]: Accepted publickey for core from 139.178.68.195 port 46598 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:25.466627 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:25.473096 systemd-logind[1521]: New session 8 of user core.
Dec 12 18:45:25.478625 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 18:45:25.801550 sshd[4060]: Connection closed by 139.178.68.195 port 46598
Dec 12 18:45:25.802596 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:25.806505 systemd[1]: sshd@7-172.238.172.51:22-139.178.68.195:46598.service: Deactivated successfully.
Dec 12 18:45:25.810150 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 18:45:25.813139 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit.
Dec 12 18:45:25.814517 systemd-logind[1521]: Removed session 8.
Dec 12 18:45:30.869874 systemd[1]: Started sshd@8-172.238.172.51:22-139.178.68.195:59352.service - OpenSSH per-connection server daemon (139.178.68.195:59352).
Dec 12 18:45:31.221369 sshd[4075]: Accepted publickey for core from 139.178.68.195 port 59352 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:31.222839 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:31.228327 systemd-logind[1521]: New session 9 of user core.
Dec 12 18:45:31.231616 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 18:45:31.533008 sshd[4078]: Connection closed by 139.178.68.195 port 59352
Dec 12 18:45:31.533907 sshd-session[4075]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:31.541206 systemd[1]: sshd@8-172.238.172.51:22-139.178.68.195:59352.service: Deactivated successfully.
Dec 12 18:45:31.543444 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 18:45:31.544773 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit.
Dec 12 18:45:31.546588 systemd-logind[1521]: Removed session 9.
Dec 12 18:45:36.595583 systemd[1]: Started sshd@9-172.238.172.51:22-139.178.68.195:59364.service - OpenSSH per-connection server daemon (139.178.68.195:59364).
Dec 12 18:45:36.935664 sshd[4091]: Accepted publickey for core from 139.178.68.195 port 59364 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:36.937554 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:36.943551 systemd-logind[1521]: New session 10 of user core.
Dec 12 18:45:36.948606 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 18:45:37.235976 sshd[4094]: Connection closed by 139.178.68.195 port 59364
Dec 12 18:45:37.236683 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:37.241067 systemd[1]: sshd@9-172.238.172.51:22-139.178.68.195:59364.service: Deactivated successfully.
Dec 12 18:45:37.243312 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 18:45:37.244537 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit.
Dec 12 18:45:37.249564 systemd-logind[1521]: Removed session 10.
Dec 12 18:45:42.304097 systemd[1]: Started sshd@10-172.238.172.51:22-139.178.68.195:37648.service - OpenSSH per-connection server daemon (139.178.68.195:37648).
Dec 12 18:45:42.646205 sshd[4108]: Accepted publickey for core from 139.178.68.195 port 37648 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:42.648143 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:42.654501 systemd-logind[1521]: New session 11 of user core.
Dec 12 18:45:42.660905 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 18:45:42.951223 sshd[4111]: Connection closed by 139.178.68.195 port 37648
Dec 12 18:45:42.951679 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:42.956125 systemd[1]: sshd@10-172.238.172.51:22-139.178.68.195:37648.service: Deactivated successfully.
Dec 12 18:45:42.958243 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 18:45:42.959252 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit.
Dec 12 18:45:42.960842 systemd-logind[1521]: Removed session 11.
Dec 12 18:45:43.014540 systemd[1]: Started sshd@11-172.238.172.51:22-139.178.68.195:37654.service - OpenSSH per-connection server daemon (139.178.68.195:37654).
Dec 12 18:45:43.378458 sshd[4123]: Accepted publickey for core from 139.178.68.195 port 37654 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:43.380152 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:43.385717 systemd-logind[1521]: New session 12 of user core.
Dec 12 18:45:43.391613 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 18:45:43.721652 sshd[4126]: Connection closed by 139.178.68.195 port 37654
Dec 12 18:45:43.722452 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:43.727191 systemd[1]: sshd@11-172.238.172.51:22-139.178.68.195:37654.service: Deactivated successfully.
Dec 12 18:45:43.729674 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 18:45:43.730734 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit.
Dec 12 18:45:43.732358 systemd-logind[1521]: Removed session 12.
Dec 12 18:45:43.784916 systemd[1]: Started sshd@12-172.238.172.51:22-139.178.68.195:37670.service - OpenSSH per-connection server daemon (139.178.68.195:37670).
Dec 12 18:45:44.135814 sshd[4136]: Accepted publickey for core from 139.178.68.195 port 37670 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:44.137829 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:44.143375 systemd-logind[1521]: New session 13 of user core.
Dec 12 18:45:44.150692 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 18:45:44.447161 sshd[4139]: Connection closed by 139.178.68.195 port 37670
Dec 12 18:45:44.448195 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:44.455015 systemd[1]: sshd@12-172.238.172.51:22-139.178.68.195:37670.service: Deactivated successfully.
Dec 12 18:45:44.458453 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 18:45:44.459704 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit.
Dec 12 18:45:44.461127 systemd-logind[1521]: Removed session 13.
Dec 12 18:45:49.508722 systemd[1]: Started sshd@13-172.238.172.51:22-139.178.68.195:37684.service - OpenSSH per-connection server daemon (139.178.68.195:37684).
Dec 12 18:45:49.839595 sshd[4151]: Accepted publickey for core from 139.178.68.195 port 37684 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:49.840733 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:49.845552 systemd-logind[1521]: New session 14 of user core.
Dec 12 18:45:49.858677 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 18:45:50.140496 sshd[4154]: Connection closed by 139.178.68.195 port 37684
Dec 12 18:45:50.141693 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:50.146538 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit.
Dec 12 18:45:50.146883 systemd[1]: sshd@13-172.238.172.51:22-139.178.68.195:37684.service: Deactivated successfully.
Dec 12 18:45:50.149929 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 18:45:50.152193 systemd-logind[1521]: Removed session 14.
Dec 12 18:45:50.204818 systemd[1]: Started sshd@14-172.238.172.51:22-139.178.68.195:42750.service - OpenSSH per-connection server daemon (139.178.68.195:42750).
Dec 12 18:45:50.559353 sshd[4166]: Accepted publickey for core from 139.178.68.195 port 42750 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:50.560886 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:50.565960 systemd-logind[1521]: New session 15 of user core.
Dec 12 18:45:50.570601 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 18:45:50.879673 sshd[4169]: Connection closed by 139.178.68.195 port 42750
Dec 12 18:45:50.880434 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:50.884169 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:45:50.884930 systemd[1]: sshd@14-172.238.172.51:22-139.178.68.195:42750.service: Deactivated successfully.
Dec 12 18:45:50.887035 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 18:45:50.888933 systemd-logind[1521]: Removed session 15.
Dec 12 18:45:50.951628 systemd[1]: Started sshd@15-172.238.172.51:22-139.178.68.195:42756.service - OpenSSH per-connection server daemon (139.178.68.195:42756).
Dec 12 18:45:51.294542 sshd[4179]: Accepted publickey for core from 139.178.68.195 port 42756 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:51.296128 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:51.301311 systemd-logind[1521]: New session 16 of user core.
Dec 12 18:45:51.307627 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 18:45:52.027692 sshd[4182]: Connection closed by 139.178.68.195 port 42756
Dec 12 18:45:52.028516 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:52.033006 systemd[1]: sshd@15-172.238.172.51:22-139.178.68.195:42756.service: Deactivated successfully.
Dec 12 18:45:52.035259 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 18:45:52.036879 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit.
Dec 12 18:45:52.038501 systemd-logind[1521]: Removed session 16.
Dec 12 18:45:52.103600 systemd[1]: Started sshd@16-172.238.172.51:22-139.178.68.195:42764.service - OpenSSH per-connection server daemon (139.178.68.195:42764).
Dec 12 18:45:52.455332 sshd[4199]: Accepted publickey for core from 139.178.68.195 port 42764 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:52.457150 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:52.466187 systemd-logind[1521]: New session 17 of user core.
Dec 12 18:45:52.472608 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 18:45:52.868435 sshd[4202]: Connection closed by 139.178.68.195 port 42764
Dec 12 18:45:52.869159 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:52.874218 systemd[1]: sshd@16-172.238.172.51:22-139.178.68.195:42764.service: Deactivated successfully.
Dec 12 18:45:52.876750 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 18:45:52.877644 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit.
Dec 12 18:45:52.879286 systemd-logind[1521]: Removed session 17.
Dec 12 18:45:52.928921 systemd[1]: Started sshd@17-172.238.172.51:22-139.178.68.195:42766.service - OpenSSH per-connection server daemon (139.178.68.195:42766).
Dec 12 18:45:53.276873 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 42766 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:53.278793 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:53.284012 systemd-logind[1521]: New session 18 of user core.
Dec 12 18:45:53.295651 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:45:53.574151 sshd[4215]: Connection closed by 139.178.68.195 port 42766
Dec 12 18:45:53.574888 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:53.578948 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:45:53.579175 systemd[1]: sshd@17-172.238.172.51:22-139.178.68.195:42766.service: Deactivated successfully.
Dec 12 18:45:53.581273 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:45:53.583285 systemd-logind[1521]: Removed session 18.
Dec 12 18:45:55.285183 kubelet[2723]: E1212 18:45:55.285093 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:45:58.638789 systemd[1]: Started sshd@18-172.238.172.51:22-139.178.68.195:42770.service - OpenSSH per-connection server daemon (139.178.68.195:42770).
Dec 12 18:45:58.979299 sshd[4231]: Accepted publickey for core from 139.178.68.195 port 42770 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:58.981212 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:58.989792 systemd-logind[1521]: New session 19 of user core.
Dec 12 18:45:58.994686 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:45:59.276207 sshd[4234]: Connection closed by 139.178.68.195 port 42770
Dec 12 18:45:59.277021 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:59.281704 systemd[1]: sshd@18-172.238.172.51:22-139.178.68.195:42770.service: Deactivated successfully.
Dec 12 18:45:59.283942 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:45:59.285757 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:45:59.288081 systemd-logind[1521]: Removed session 19.
Dec 12 18:46:04.341640 systemd[1]: Started sshd@19-172.238.172.51:22-139.178.68.195:45068.service - OpenSSH per-connection server daemon (139.178.68.195:45068).
Dec 12 18:46:04.684842 sshd[4246]: Accepted publickey for core from 139.178.68.195 port 45068 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:46:04.686312 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:04.691720 systemd-logind[1521]: New session 20 of user core.
Dec 12 18:46:04.695618 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:46:04.988704 sshd[4249]: Connection closed by 139.178.68.195 port 45068
Dec 12 18:46:04.989339 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:04.993975 systemd[1]: sshd@19-172.238.172.51:22-139.178.68.195:45068.service: Deactivated successfully.
Dec 12 18:46:04.996673 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:46:04.997901 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:46:05.000177 systemd-logind[1521]: Removed session 20.
Dec 12 18:46:05.053784 systemd[1]: Started sshd@20-172.238.172.51:22-139.178.68.195:45070.service - OpenSSH per-connection server daemon (139.178.68.195:45070).
Dec 12 18:46:05.398811 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 45070 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:46:05.400196 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:05.404675 systemd-logind[1521]: New session 21 of user core.
Dec 12 18:46:05.410624 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:46:06.972591 containerd[1544]: time="2025-12-12T18:46:06.972228699Z" level=info msg="StopContainer for \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" with timeout 30 (s)"
Dec 12 18:46:06.974577 containerd[1544]: time="2025-12-12T18:46:06.974514264Z" level=info msg="Stop container \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" with signal terminated"
Dec 12 18:46:07.017245 systemd[1]: cri-containerd-6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86.scope: Deactivated successfully.
Dec 12 18:46:07.024849 containerd[1544]: time="2025-12-12T18:46:07.024719511Z" level=info msg="received container exit event container_id:\"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" id:\"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" pid:3258 exited_at:{seconds:1765565167 nanos:20537370}"
Dec 12 18:46:07.030741 containerd[1544]: time="2025-12-12T18:46:07.030692197Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:46:07.041648 containerd[1544]: time="2025-12-12T18:46:07.041584823Z" level=info msg="StopContainer for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" with timeout 2 (s)"
Dec 12 18:46:07.042016 containerd[1544]: time="2025-12-12T18:46:07.042000523Z" level=info msg="Stop container \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" with signal terminated"
Dec 12 18:46:07.050542 systemd-networkd[1450]: lxc_health: Link DOWN
Dec 12 18:46:07.050550 systemd-networkd[1450]: lxc_health: Lost carrier
Dec 12 18:46:07.066420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86-rootfs.mount: Deactivated successfully.
Dec 12 18:46:07.078315 systemd[1]: cri-containerd-a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454.scope: Deactivated successfully.
Dec 12 18:46:07.078706 systemd[1]: cri-containerd-a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454.scope: Consumed 6.354s CPU time, 125.3M memory peak, 128K read from disk, 13.3M written to disk.
Dec 12 18:46:07.081770 containerd[1544]: time="2025-12-12T18:46:07.081737844Z" level=info msg="received container exit event container_id:\"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" id:\"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" pid:3366 exited_at:{seconds:1765565167 nanos:81406355}"
Dec 12 18:46:07.086432 containerd[1544]: time="2025-12-12T18:46:07.086303724Z" level=info msg="StopContainer for \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" returns successfully"
Dec 12 18:46:07.087444 containerd[1544]: time="2025-12-12T18:46:07.087423481Z" level=info msg="StopPodSandbox for \"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\""
Dec 12 18:46:07.087672 containerd[1544]: time="2025-12-12T18:46:07.087633681Z" level=info msg="Container to stop \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:46:07.101072 systemd[1]: cri-containerd-a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7.scope: Deactivated successfully.
Dec 12 18:46:07.106099 containerd[1544]: time="2025-12-12T18:46:07.106032400Z" level=info msg="received sandbox exit event container_id:\"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" id:\"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" exit_status:137 exited_at:{seconds:1765565167 nanos:105145052}" monitor_name=podsandbox
Dec 12 18:46:07.121264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454-rootfs.mount: Deactivated successfully.
Dec 12 18:46:07.131433 containerd[1544]: time="2025-12-12T18:46:07.131320374Z" level=info msg="StopContainer for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" returns successfully"
Dec 12 18:46:07.132900 containerd[1544]: time="2025-12-12T18:46:07.132865991Z" level=info msg="StopPodSandbox for \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\""
Dec 12 18:46:07.132950 containerd[1544]: time="2025-12-12T18:46:07.132915961Z" level=info msg="Container to stop \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:46:07.132950 containerd[1544]: time="2025-12-12T18:46:07.132926401Z" level=info msg="Container to stop \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:46:07.132950 containerd[1544]: time="2025-12-12T18:46:07.132936641Z" level=info msg="Container to stop \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:46:07.133036 containerd[1544]: time="2025-12-12T18:46:07.132951500Z" level=info msg="Container to stop \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:46:07.133036 containerd[1544]: time="2025-12-12T18:46:07.132964080Z" level=info msg="Container to stop \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:46:07.142067 systemd[1]: cri-containerd-2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6.scope: Deactivated successfully.
Dec 12 18:46:07.144973 containerd[1544]: time="2025-12-12T18:46:07.144928544Z" level=info msg="received sandbox exit event container_id:\"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" id:\"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" exit_status:137 exited_at:{seconds:1765565167 nanos:144739673}" monitor_name=podsandbox
Dec 12 18:46:07.150708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7-rootfs.mount: Deactivated successfully.
Dec 12 18:46:07.155003 containerd[1544]: time="2025-12-12T18:46:07.154334753Z" level=info msg="shim disconnected" id=a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7 namespace=k8s.io
Dec 12 18:46:07.155153 containerd[1544]: time="2025-12-12T18:46:07.155137841Z" level=warning msg="cleaning up after shim disconnected" id=a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7 namespace=k8s.io
Dec 12 18:46:07.155540 containerd[1544]: time="2025-12-12T18:46:07.155408390Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 18:46:07.171747 containerd[1544]: time="2025-12-12T18:46:07.171718163Z" level=info msg="TearDown network for sandbox \"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" successfully"
Dec 12 18:46:07.171878 containerd[1544]: time="2025-12-12T18:46:07.171862354Z" level=info msg="StopPodSandbox for \"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" returns successfully"
Dec 12 18:46:07.172628 containerd[1544]: time="2025-12-12T18:46:07.171975074Z" level=info msg="received sandbox container exit event sandbox_id:\"a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7\" exit_status:137 exited_at:{seconds:1765565167 nanos:105145052}" monitor_name=criService
Dec 12 18:46:07.172853 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a45067c7eed5681248180b47feba76cb019ac0e6e0e480e7d18c7863937c77e7-shm.mount: Deactivated successfully.
Dec 12 18:46:07.184714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6-rootfs.mount: Deactivated successfully.
Dec 12 18:46:07.190796 containerd[1544]: time="2025-12-12T18:46:07.190627371Z" level=info msg="shim disconnected" id=2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6 namespace=k8s.io
Dec 12 18:46:07.190796 containerd[1544]: time="2025-12-12T18:46:07.190653351Z" level=warning msg="cleaning up after shim disconnected" id=2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6 namespace=k8s.io
Dec 12 18:46:07.190796 containerd[1544]: time="2025-12-12T18:46:07.190660651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 18:46:07.207759 containerd[1544]: time="2025-12-12T18:46:07.207654073Z" level=info msg="received sandbox container exit event sandbox_id:\"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" exit_status:137 exited_at:{seconds:1765565167 nanos:144739673}" monitor_name=criService
Dec 12 18:46:07.207999 containerd[1544]: time="2025-12-12T18:46:07.207895294Z" level=info msg="TearDown network for sandbox \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" successfully"
Dec 12 18:46:07.207999 containerd[1544]: time="2025-12-12T18:46:07.207914694Z" level=info msg="StopPodSandbox for \"2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6\" returns successfully"
Dec 12 18:46:07.314571 kubelet[2723]: I1212 18:46:07.314006 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2f04a4f-affe-4334-b273-55a29b910b13-clustermesh-secrets\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.314571 kubelet[2723]: I1212 18:46:07.314044 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-hubble-tls\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.314571 kubelet[2723]: I1212 18:46:07.314064 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-kernel\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.314571 kubelet[2723]: I1212 18:46:07.314083 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-bpf-maps\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.314571 kubelet[2723]: I1212 18:46:07.314097 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-hostproc\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.314571 kubelet[2723]: I1212 18:46:07.314113 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-cgroup\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315091 kubelet[2723]: I1212 18:46:07.314127 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-etc-cni-netd\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315091 kubelet[2723]: I1212 18:46:07.314148 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-config-path\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315091 kubelet[2723]: I1212 18:46:07.314165 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f8e1773-c08a-4770-b8a3-6275e0a1781d-cilium-config-path\") pod \"5f8e1773-c08a-4770-b8a3-6275e0a1781d\" (UID: \"5f8e1773-c08a-4770-b8a3-6275e0a1781d\") "
Dec 12 18:46:07.315091 kubelet[2723]: I1212 18:46:07.314180 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-net\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315091 kubelet[2723]: I1212 18:46:07.314195 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-lib-modules\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315091 kubelet[2723]: I1212 18:46:07.314211 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw28f\" (UniqueName: \"kubernetes.io/projected/5f8e1773-c08a-4770-b8a3-6275e0a1781d-kube-api-access-qw28f\") pod \"5f8e1773-c08a-4770-b8a3-6275e0a1781d\" (UID: \"5f8e1773-c08a-4770-b8a3-6275e0a1781d\") "
Dec 12 18:46:07.315425 kubelet[2723]: I1212 18:46:07.314230 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879kt\" (UniqueName: \"kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-kube-api-access-879kt\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315425 kubelet[2723]: I1212 18:46:07.314246 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cni-path\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315425 kubelet[2723]: I1212 18:46:07.314262 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-xtables-lock\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315425 kubelet[2723]: I1212 18:46:07.314279 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-run\") pod \"f2f04a4f-affe-4334-b273-55a29b910b13\" (UID: \"f2f04a4f-affe-4334-b273-55a29b910b13\") "
Dec 12 18:46:07.315425 kubelet[2723]: I1212 18:46:07.314321 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321529 kubelet[2723]: I1212 18:46:07.320658 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321529 kubelet[2723]: I1212 18:46:07.320707 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321835 kubelet[2723]: I1212 18:46:07.321810 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321872 kubelet[2723]: I1212 18:46:07.321854 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321900 kubelet[2723]: I1212 18:46:07.321871 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321900 kubelet[2723]: I1212 18:46:07.321885 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.321900 kubelet[2723]: I1212 18:46:07.321898 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.323467 kubelet[2723]: I1212 18:46:07.323427 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.323467 kubelet[2723]: I1212 18:46:07.323461 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:46:07.323853 kubelet[2723]: I1212 18:46:07.323831 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 18:46:07.326023 kubelet[2723]: I1212 18:46:07.325984 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2f04a4f-affe-4334-b273-55a29b910b13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 18:46:07.329446 kubelet[2723]: I1212 18:46:07.329419 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8e1773-c08a-4770-b8a3-6275e0a1781d-kube-api-access-qw28f" (OuterVolumeSpecName: "kube-api-access-qw28f") pod "5f8e1773-c08a-4770-b8a3-6275e0a1781d" (UID: "5f8e1773-c08a-4770-b8a3-6275e0a1781d"). InnerVolumeSpecName "kube-api-access-qw28f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 18:46:07.331118 kubelet[2723]: I1212 18:46:07.331099 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 18:46:07.332566 kubelet[2723]: I1212 18:46:07.332540 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8e1773-c08a-4770-b8a3-6275e0a1781d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f8e1773-c08a-4770-b8a3-6275e0a1781d" (UID: "5f8e1773-c08a-4770-b8a3-6275e0a1781d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 18:46:07.333983 kubelet[2723]: I1212 18:46:07.333947 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-kube-api-access-879kt" (OuterVolumeSpecName: "kube-api-access-879kt") pod "f2f04a4f-affe-4334-b273-55a29b910b13" (UID: "f2f04a4f-affe-4334-b273-55a29b910b13"). InnerVolumeSpecName "kube-api-access-879kt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414420 2723 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2f04a4f-affe-4334-b273-55a29b910b13-clustermesh-secrets\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414447 2723 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-hubble-tls\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414460 2723 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-kernel\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414471 2723 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-bpf-maps\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414504 2723 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-hostproc\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414513 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-cgroup\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414494 kubelet[2723]: I1212 18:46:07.414522 2723 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-etc-cni-netd\") on node \"172-238-172-51\" DevicePath \"\""
Dec 12 18:46:07.414825
kubelet[2723]: I1212 18:46:07.414533 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-config-path\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 18:46:07.414545 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f8e1773-c08a-4770-b8a3-6275e0a1781d-cilium-config-path\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 18:46:07.414554 2723 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-host-proc-sys-net\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 18:46:07.414563 2723 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-lib-modules\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 18:46:07.414572 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qw28f\" (UniqueName: \"kubernetes.io/projected/5f8e1773-c08a-4770-b8a3-6275e0a1781d-kube-api-access-qw28f\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 18:46:07.414581 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-879kt\" (UniqueName: \"kubernetes.io/projected/f2f04a4f-affe-4334-b273-55a29b910b13-kube-api-access-879kt\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 18:46:07.414597 2723 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cni-path\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.414825 kubelet[2723]: I1212 
18:46:07.414605 2723 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-xtables-lock\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.415084 kubelet[2723]: I1212 18:46:07.414615 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2f04a4f-affe-4334-b273-55a29b910b13-cilium-run\") on node \"172-238-172-51\" DevicePath \"\"" Dec 12 18:46:07.659800 kubelet[2723]: I1212 18:46:07.657900 2723 scope.go:117] "RemoveContainer" containerID="a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454" Dec 12 18:46:07.664086 containerd[1544]: time="2025-12-12T18:46:07.664058328Z" level=info msg="RemoveContainer for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\"" Dec 12 18:46:07.673541 systemd[1]: Removed slice kubepods-burstable-podf2f04a4f_affe_4334_b273_55a29b910b13.slice - libcontainer container kubepods-burstable-podf2f04a4f_affe_4334_b273_55a29b910b13.slice. Dec 12 18:46:07.674444 containerd[1544]: time="2025-12-12T18:46:07.673829715Z" level=info msg="RemoveContainer for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" returns successfully" Dec 12 18:46:07.673637 systemd[1]: kubepods-burstable-podf2f04a4f_affe_4334_b273_55a29b910b13.slice: Consumed 6.464s CPU time, 125.8M memory peak, 128K read from disk, 13.3M written to disk. 
Dec 12 18:46:07.676886 kubelet[2723]: I1212 18:46:07.676830 2723 scope.go:117] "RemoveContainer" containerID="11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b" Dec 12 18:46:07.680716 containerd[1544]: time="2025-12-12T18:46:07.680618870Z" level=info msg="RemoveContainer for \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\"" Dec 12 18:46:07.687793 containerd[1544]: time="2025-12-12T18:46:07.687740704Z" level=info msg="RemoveContainer for \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\" returns successfully" Dec 12 18:46:07.690159 kubelet[2723]: I1212 18:46:07.690124 2723 scope.go:117] "RemoveContainer" containerID="a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c" Dec 12 18:46:07.692548 systemd[1]: Removed slice kubepods-besteffort-pod5f8e1773_c08a_4770_b8a3_6275e0a1781d.slice - libcontainer container kubepods-besteffort-pod5f8e1773_c08a_4770_b8a3_6275e0a1781d.slice. Dec 12 18:46:07.696697 containerd[1544]: time="2025-12-12T18:46:07.696656934Z" level=info msg="RemoveContainer for \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\"" Dec 12 18:46:07.703460 containerd[1544]: time="2025-12-12T18:46:07.703426700Z" level=info msg="RemoveContainer for \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\" returns successfully" Dec 12 18:46:07.703658 kubelet[2723]: I1212 18:46:07.703640 2723 scope.go:117] "RemoveContainer" containerID="403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb" Dec 12 18:46:07.706595 containerd[1544]: time="2025-12-12T18:46:07.706547313Z" level=info msg="RemoveContainer for \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\"" Dec 12 18:46:07.711456 containerd[1544]: time="2025-12-12T18:46:07.711340512Z" level=info msg="RemoveContainer for \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\" returns successfully" Dec 12 18:46:07.711972 kubelet[2723]: I1212 18:46:07.711942 2723 scope.go:117] 
"RemoveContainer" containerID="3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0" Dec 12 18:46:07.716346 containerd[1544]: time="2025-12-12T18:46:07.716070992Z" level=info msg="RemoveContainer for \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\"" Dec 12 18:46:07.720757 containerd[1544]: time="2025-12-12T18:46:07.720733341Z" level=info msg="RemoveContainer for \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\" returns successfully" Dec 12 18:46:07.721130 kubelet[2723]: I1212 18:46:07.721113 2723 scope.go:117] "RemoveContainer" containerID="a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454" Dec 12 18:46:07.721589 containerd[1544]: time="2025-12-12T18:46:07.721543409Z" level=error msg="ContainerStatus for \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\": not found" Dec 12 18:46:07.721808 kubelet[2723]: E1212 18:46:07.721783 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\": not found" containerID="a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454" Dec 12 18:46:07.723026 kubelet[2723]: I1212 18:46:07.722582 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454"} err="failed to get container status \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7d38a80445e0ec06fa5388276ebea9606b74b8f79b2e1f287fc7a3dec7ba454\": not found" Dec 12 18:46:07.723026 kubelet[2723]: I1212 18:46:07.722659 2723 scope.go:117] "RemoveContainer" 
containerID="11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b" Dec 12 18:46:07.723282 containerd[1544]: time="2025-12-12T18:46:07.723250406Z" level=error msg="ContainerStatus for \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\": not found" Dec 12 18:46:07.723572 kubelet[2723]: E1212 18:46:07.723528 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\": not found" containerID="11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b" Dec 12 18:46:07.723572 kubelet[2723]: I1212 18:46:07.723558 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b"} err="failed to get container status \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\": rpc error: code = NotFound desc = an error occurred when try to find container \"11899e501338017be9dfa38e62220a56df00050abdcc80bd38f7d2b095f6690b\": not found" Dec 12 18:46:07.723685 kubelet[2723]: I1212 18:46:07.723610 2723 scope.go:117] "RemoveContainer" containerID="a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c" Dec 12 18:46:07.723981 containerd[1544]: time="2025-12-12T18:46:07.723954364Z" level=error msg="ContainerStatus for \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\": not found" Dec 12 18:46:07.724596 kubelet[2723]: E1212 18:46:07.724431 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\": not found" containerID="a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c" Dec 12 18:46:07.724596 kubelet[2723]: I1212 18:46:07.724525 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c"} err="failed to get container status \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a320fd29eb53489c00448752055ab989e68eac767b77fdeefe946b0d1919650c\": not found" Dec 12 18:46:07.724596 kubelet[2723]: I1212 18:46:07.724545 2723 scope.go:117] "RemoveContainer" containerID="403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb" Dec 12 18:46:07.725457 containerd[1544]: time="2025-12-12T18:46:07.725411091Z" level=error msg="ContainerStatus for \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\": not found" Dec 12 18:46:07.726079 kubelet[2723]: E1212 18:46:07.725822 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\": not found" containerID="403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb" Dec 12 18:46:07.726079 kubelet[2723]: I1212 18:46:07.725854 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb"} err="failed to get container status \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"403aa15d4a8e9c720f6a98563767a064ab0cb7d36b8fd5ce98ebe0cdda72a4eb\": not found" Dec 12 18:46:07.726079 kubelet[2723]: I1212 18:46:07.725875 2723 scope.go:117] "RemoveContainer" containerID="3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0" Dec 12 18:46:07.726190 containerd[1544]: time="2025-12-12T18:46:07.726020490Z" level=error msg="ContainerStatus for \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\": not found" Dec 12 18:46:07.726523 kubelet[2723]: E1212 18:46:07.726455 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\": not found" containerID="3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0" Dec 12 18:46:07.726801 kubelet[2723]: I1212 18:46:07.726717 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0"} err="failed to get container status \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\": rpc error: code = NotFound desc = an error occurred when try to find container \"3af701b8728bb66523af8e743bb03c4cd9c4146cb5c4448472cfe590f0013ee0\": not found" Dec 12 18:46:07.726801 kubelet[2723]: I1212 18:46:07.726741 2723 scope.go:117] "RemoveContainer" containerID="6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86" Dec 12 18:46:07.729579 containerd[1544]: time="2025-12-12T18:46:07.729431552Z" level=info msg="RemoveContainer for \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\"" Dec 12 18:46:07.733465 containerd[1544]: time="2025-12-12T18:46:07.733398013Z" level=info msg="RemoveContainer for 
\"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" returns successfully" Dec 12 18:46:07.733771 kubelet[2723]: I1212 18:46:07.733738 2723 scope.go:117] "RemoveContainer" containerID="6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86" Dec 12 18:46:07.734055 containerd[1544]: time="2025-12-12T18:46:07.734031342Z" level=error msg="ContainerStatus for \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\": not found" Dec 12 18:46:07.734257 kubelet[2723]: E1212 18:46:07.734209 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\": not found" containerID="6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86" Dec 12 18:46:07.735558 kubelet[2723]: I1212 18:46:07.735533 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86"} err="failed to get container status \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dae80705ead8ea5d19a2785a35bbd8926dd74c60584610727b5751fb014fe86\": not found" Dec 12 18:46:08.065745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ace079d8a61075e206a60f64bda2f1e892411a70e19467db4f956d0803294e6-shm.mount: Deactivated successfully. Dec 12 18:46:08.065854 systemd[1]: var-lib-kubelet-pods-5f8e1773\x2dc08a\x2d4770\x2db8a3\x2d6275e0a1781d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqw28f.mount: Deactivated successfully. 
Dec 12 18:46:08.065938 systemd[1]: var-lib-kubelet-pods-f2f04a4f\x2daffe\x2d4334\x2db273\x2d55a29b910b13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d879kt.mount: Deactivated successfully. Dec 12 18:46:08.066007 systemd[1]: var-lib-kubelet-pods-f2f04a4f\x2daffe\x2d4334\x2db273\x2d55a29b910b13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 18:46:08.066073 systemd[1]: var-lib-kubelet-pods-f2f04a4f\x2daffe\x2d4334\x2db273\x2d55a29b910b13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 18:46:08.287306 kubelet[2723]: I1212 18:46:08.287259 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8e1773-c08a-4770-b8a3-6275e0a1781d" path="/var/lib/kubelet/pods/5f8e1773-c08a-4770-b8a3-6275e0a1781d/volumes" Dec 12 18:46:08.287852 kubelet[2723]: I1212 18:46:08.287836 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f04a4f-affe-4334-b273-55a29b910b13" path="/var/lib/kubelet/pods/f2f04a4f-affe-4334-b273-55a29b910b13/volumes" Dec 12 18:46:08.981072 sshd[4264]: Connection closed by 139.178.68.195 port 45070 Dec 12 18:46:08.981692 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:08.986659 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:46:08.986881 systemd[1]: sshd@20-172.238.172.51:22-139.178.68.195:45070.service: Deactivated successfully. Dec 12 18:46:08.989077 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:46:08.991033 systemd-logind[1521]: Removed session 21. Dec 12 18:46:09.045694 systemd[1]: Started sshd@21-172.238.172.51:22-139.178.68.195:45084.service - OpenSSH per-connection server daemon (139.178.68.195:45084). 
Dec 12 18:46:09.408340 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 45084 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:09.409890 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:09.416064 systemd-logind[1521]: New session 22 of user core. Dec 12 18:46:09.430657 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:46:10.139232 systemd[1]: Created slice kubepods-burstable-podb9176076_fd63_49a0_8027_23c9f9a6e67a.slice - libcontainer container kubepods-burstable-podb9176076_fd63_49a0_8027_23c9f9a6e67a.slice. Dec 12 18:46:10.159636 sshd[4417]: Connection closed by 139.178.68.195 port 45084 Dec 12 18:46:10.163680 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:10.169702 systemd[1]: sshd@21-172.238.172.51:22-139.178.68.195:45084.service: Deactivated successfully. Dec 12 18:46:10.172676 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:46:10.175677 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:46:10.178993 systemd-logind[1521]: Removed session 22. Dec 12 18:46:10.225935 systemd[1]: Started sshd@22-172.238.172.51:22-139.178.68.195:36446.service - OpenSSH per-connection server daemon (139.178.68.195:36446). 
Dec 12 18:46:10.231517 kubelet[2723]: I1212 18:46:10.231457 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-cilium-cgroup\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232060 kubelet[2723]: I1212 18:46:10.231614 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5spsb\" (UniqueName: \"kubernetes.io/projected/b9176076-fd63-49a0-8027-23c9f9a6e67a-kube-api-access-5spsb\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232060 kubelet[2723]: I1212 18:46:10.231639 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-cilium-run\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232060 kubelet[2723]: I1212 18:46:10.231691 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-bpf-maps\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232060 kubelet[2723]: I1212 18:46:10.231712 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9176076-fd63-49a0-8027-23c9f9a6e67a-clustermesh-secrets\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232060 kubelet[2723]: I1212 18:46:10.231752 2723 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-cni-path\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232060 kubelet[2723]: I1212 18:46:10.231792 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-etc-cni-netd\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232287 kubelet[2723]: I1212 18:46:10.231841 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-host-proc-sys-net\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232287 kubelet[2723]: I1212 18:46:10.231860 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-host-proc-sys-kernel\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232287 kubelet[2723]: I1212 18:46:10.231877 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-lib-modules\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232287 kubelet[2723]: I1212 18:46:10.231895 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b9176076-fd63-49a0-8027-23c9f9a6e67a-cilium-config-path\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232287 kubelet[2723]: I1212 18:46:10.231928 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9176076-fd63-49a0-8027-23c9f9a6e67a-cilium-ipsec-secrets\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232558 kubelet[2723]: I1212 18:46:10.231948 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9176076-fd63-49a0-8027-23c9f9a6e67a-hubble-tls\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232558 kubelet[2723]: I1212 18:46:10.231966 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-xtables-lock\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.232558 kubelet[2723]: I1212 18:46:10.232001 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9176076-fd63-49a0-8027-23c9f9a6e67a-hostproc\") pod \"cilium-ljbp7\" (UID: \"b9176076-fd63-49a0-8027-23c9f9a6e67a\") " pod="kube-system/cilium-ljbp7" Dec 12 18:46:10.285851 kubelet[2723]: E1212 18:46:10.285802 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:46:10.388297 kubelet[2723]: E1212 18:46:10.388264 2723 
kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 18:46:10.447438 kubelet[2723]: E1212 18:46:10.445986 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:46:10.449912 containerd[1544]: time="2025-12-12T18:46:10.449668320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljbp7,Uid:b9176076-fd63-49a0-8027-23c9f9a6e67a,Namespace:kube-system,Attempt:0,}" Dec 12 18:46:10.471509 containerd[1544]: time="2025-12-12T18:46:10.470460935Z" level=info msg="connecting to shim 123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335" address="unix:///run/containerd/s/cfad6f20fe5c0916bff879aec887dc901601eb17d9790179264e79deb27845d2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:46:10.496614 systemd[1]: Started cri-containerd-123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335.scope - libcontainer container 123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335. 
Dec 12 18:46:10.524890 containerd[1544]: time="2025-12-12T18:46:10.524850360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljbp7,Uid:b9176076-fd63-49a0-8027-23c9f9a6e67a,Namespace:kube-system,Attempt:0,} returns sandbox id \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\""
Dec 12 18:46:10.526047 kubelet[2723]: E1212 18:46:10.526011 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:10.530345 containerd[1544]: time="2025-12-12T18:46:10.530312248Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 18:46:10.536580 containerd[1544]: time="2025-12-12T18:46:10.536547946Z" level=info msg="Container c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.539998 containerd[1544]: time="2025-12-12T18:46:10.539962748Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b\""
Dec 12 18:46:10.540394 containerd[1544]: time="2025-12-12T18:46:10.540364667Z" level=info msg="StartContainer for \"c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b\""
Dec 12 18:46:10.541201 containerd[1544]: time="2025-12-12T18:46:10.541125456Z" level=info msg="connecting to shim c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b" address="unix:///run/containerd/s/cfad6f20fe5c0916bff879aec887dc901601eb17d9790179264e79deb27845d2" protocol=ttrpc version=3
Dec 12 18:46:10.563654 systemd[1]: Started cri-containerd-c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b.scope - libcontainer container c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b.
Dec 12 18:46:10.581112 sshd[4428]: Accepted publickey for core from 139.178.68.195 port 36446 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:46:10.583201 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:10.588567 systemd-logind[1521]: New session 23 of user core.
Dec 12 18:46:10.598806 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 18:46:10.608851 containerd[1544]: time="2025-12-12T18:46:10.608826953Z" level=info msg="StartContainer for \"c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b\" returns successfully"
Dec 12 18:46:10.618228 systemd[1]: cri-containerd-c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b.scope: Deactivated successfully.
Dec 12 18:46:10.623006 containerd[1544]: time="2025-12-12T18:46:10.622981053Z" level=info msg="received container exit event container_id:\"c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b\" id:\"c5eaadfd0b3dd0c6dfa05ec2a90ec842ef717a20610d951b264a817eeb5fc17b\" pid:4493 exited_at:{seconds:1765565170 nanos:622283374}"
Dec 12 18:46:10.687701 kubelet[2723]: E1212 18:46:10.687633 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:10.695680 containerd[1544]: time="2025-12-12T18:46:10.695625169Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 18:46:10.715840 containerd[1544]: time="2025-12-12T18:46:10.715736057Z" level=info msg="Container 4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.723928 containerd[1544]: time="2025-12-12T18:46:10.723891979Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089\""
Dec 12 18:46:10.725075 containerd[1544]: time="2025-12-12T18:46:10.725048897Z" level=info msg="StartContainer for \"4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089\""
Dec 12 18:46:10.727931 containerd[1544]: time="2025-12-12T18:46:10.727898701Z" level=info msg="connecting to shim 4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089" address="unix:///run/containerd/s/cfad6f20fe5c0916bff879aec887dc901601eb17d9790179264e79deb27845d2" protocol=ttrpc version=3
Dec 12 18:46:10.754630 systemd[1]: Started cri-containerd-4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089.scope - libcontainer container 4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089.
Dec 12 18:46:10.793402 containerd[1544]: time="2025-12-12T18:46:10.793362452Z" level=info msg="StartContainer for \"4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089\" returns successfully"
Dec 12 18:46:10.801656 systemd[1]: cri-containerd-4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089.scope: Deactivated successfully.
Dec 12 18:46:10.803336 containerd[1544]: time="2025-12-12T18:46:10.803279501Z" level=info msg="received container exit event container_id:\"4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089\" id:\"4837c11f25a7de6e0e3fa84d6a383ef389501bcadf60ca99c607dd89e1f0f089\" pid:4541 exited_at:{seconds:1765565170 nanos:803010792}"
Dec 12 18:46:10.821689 sshd[4508]: Connection closed by 139.178.68.195 port 36446
Dec 12 18:46:10.822948 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:10.829950 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit.
Dec 12 18:46:10.830684 systemd[1]: sshd@22-172.238.172.51:22-139.178.68.195:36446.service: Deactivated successfully.
Dec 12 18:46:10.834051 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 18:46:10.836930 systemd-logind[1521]: Removed session 23.
Dec 12 18:46:10.889058 systemd[1]: Started sshd@23-172.238.172.51:22-139.178.68.195:36456.service - OpenSSH per-connection server daemon (139.178.68.195:36456).
Dec 12 18:46:11.237162 sshd[4581]: Accepted publickey for core from 139.178.68.195 port 36456 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:46:11.239271 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:11.244779 systemd-logind[1521]: New session 24 of user core.
Dec 12 18:46:11.256648 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 12 18:46:11.690759 kubelet[2723]: E1212 18:46:11.690550 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:11.695512 containerd[1544]: time="2025-12-12T18:46:11.694803730Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 18:46:11.710303 containerd[1544]: time="2025-12-12T18:46:11.708225662Z" level=info msg="Container 45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:11.719843 containerd[1544]: time="2025-12-12T18:46:11.719712268Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393\""
Dec 12 18:46:11.721115 containerd[1544]: time="2025-12-12T18:46:11.721052535Z" level=info msg="StartContainer for \"45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393\""
Dec 12 18:46:11.723397 containerd[1544]: time="2025-12-12T18:46:11.723355080Z" level=info msg="connecting to shim 45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393" address="unix:///run/containerd/s/cfad6f20fe5c0916bff879aec887dc901601eb17d9790179264e79deb27845d2" protocol=ttrpc version=3
Dec 12 18:46:11.744790 systemd[1]: Started cri-containerd-45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393.scope - libcontainer container 45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393.
Dec 12 18:46:11.853802 containerd[1544]: time="2025-12-12T18:46:11.853719689Z" level=info msg="StartContainer for \"45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393\" returns successfully"
Dec 12 18:46:11.857355 systemd[1]: cri-containerd-45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393.scope: Deactivated successfully.
Dec 12 18:46:11.858874 containerd[1544]: time="2025-12-12T18:46:11.858853219Z" level=info msg="received container exit event container_id:\"45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393\" id:\"45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393\" pid:4604 exited_at:{seconds:1765565171 nanos:858589339}"
Dec 12 18:46:11.884121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45a137ed6e59836a0cf2d6e0c10a07ee4114b0f6eac719b5ee3aa976eac28393-rootfs.mount: Deactivated successfully.
Dec 12 18:46:12.697694 kubelet[2723]: E1212 18:46:12.697646 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:12.703949 containerd[1544]: time="2025-12-12T18:46:12.703896254Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 18:46:12.718456 containerd[1544]: time="2025-12-12T18:46:12.717652946Z" level=info msg="Container ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:12.728097 containerd[1544]: time="2025-12-12T18:46:12.728062495Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0\""
Dec 12 18:46:12.729171 containerd[1544]: time="2025-12-12T18:46:12.729155033Z" level=info msg="StartContainer for \"ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0\""
Dec 12 18:46:12.730639 containerd[1544]: time="2025-12-12T18:46:12.730605759Z" level=info msg="connecting to shim ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0" address="unix:///run/containerd/s/cfad6f20fe5c0916bff879aec887dc901601eb17d9790179264e79deb27845d2" protocol=ttrpc version=3
Dec 12 18:46:12.772628 systemd[1]: Started cri-containerd-ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0.scope - libcontainer container ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0.
Dec 12 18:46:12.815010 systemd[1]: cri-containerd-ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0.scope: Deactivated successfully.
Dec 12 18:46:12.817265 containerd[1544]: time="2025-12-12T18:46:12.817082133Z" level=info msg="received container exit event container_id:\"ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0\" id:\"ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0\" pid:4645 exited_at:{seconds:1765565172 nanos:816873142}"
Dec 12 18:46:12.817265 containerd[1544]: time="2025-12-12T18:46:12.817198892Z" level=info msg="StartContainer for \"ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0\" returns successfully"
Dec 12 18:46:12.841204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebf2ed41dd8860507e7370baf009b29e921c00f46e996a504c7ce66c383b97e0-rootfs.mount: Deactivated successfully.
Dec 12 18:46:13.353022 kubelet[2723]: I1212 18:46:13.352457 2723 setters.go:618] "Node became not ready" node="172-238-172-51" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T18:46:13Z","lastTransitionTime":"2025-12-12T18:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 12 18:46:13.702870 kubelet[2723]: E1212 18:46:13.702840 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:13.706660 containerd[1544]: time="2025-12-12T18:46:13.706615145Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 18:46:13.722508 containerd[1544]: time="2025-12-12T18:46:13.721742114Z" level=info msg="Container 9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:13.725804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867598831.mount: Deactivated successfully.
Dec 12 18:46:13.729858 containerd[1544]: time="2025-12-12T18:46:13.729825108Z" level=info msg="CreateContainer within sandbox \"123126d096b415cbaa3742dc3cd3f72432ddddb5e00b03f0c29af096c5c75335\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa\""
Dec 12 18:46:13.730371 containerd[1544]: time="2025-12-12T18:46:13.730295527Z" level=info msg="StartContainer for \"9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa\""
Dec 12 18:46:13.731104 containerd[1544]: time="2025-12-12T18:46:13.731085175Z" level=info msg="connecting to shim 9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa" address="unix:///run/containerd/s/cfad6f20fe5c0916bff879aec887dc901601eb17d9790179264e79deb27845d2" protocol=ttrpc version=3
Dec 12 18:46:13.754596 systemd[1]: Started cri-containerd-9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa.scope - libcontainer container 9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa.
Dec 12 18:46:13.806955 containerd[1544]: time="2025-12-12T18:46:13.806862693Z" level=info msg="StartContainer for \"9f268a1e0cad9138d72e8a93ebf0bcd05985887aa3831bcc9c7ec1dbc18a5ffa\" returns successfully"
Dec 12 18:46:14.264517 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 12 18:46:14.710097 kubelet[2723]: E1212 18:46:14.710060 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:14.730428 kubelet[2723]: I1212 18:46:14.730352 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ljbp7" podStartSLOduration=4.730335295 podStartE2EDuration="4.730335295s" podCreationTimestamp="2025-12-12 18:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:14.728950248 +0000 UTC m=+164.547484847" watchObservedRunningTime="2025-12-12 18:46:14.730335295 +0000 UTC m=+164.548869904"
Dec 12 18:46:16.447584 kubelet[2723]: E1212 18:46:16.447362 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:17.270465 systemd-networkd[1450]: lxc_health: Link UP
Dec 12 18:46:17.273222 systemd-networkd[1450]: lxc_health: Gained carrier
Dec 12 18:46:17.285972 kubelet[2723]: E1212 18:46:17.285342 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:18.448735 kubelet[2723]: E1212 18:46:18.448675 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:18.723227 kubelet[2723]: E1212 18:46:18.722240 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:19.071151 systemd-networkd[1450]: lxc_health: Gained IPv6LL
Dec 12 18:46:19.724902 kubelet[2723]: E1212 18:46:19.724689 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:19.971672 kubelet[2723]: E1212 18:46:19.971608 2723 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41512->127.0.0.1:44721: write tcp 127.0.0.1:41512->127.0.0.1:44721: write: broken pipe
Dec 12 18:46:21.285846 kubelet[2723]: E1212 18:46:21.285735 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:24.220912 kubelet[2723]: E1212 18:46:24.220690 2723 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41374->127.0.0.1:44721: write tcp 127.0.0.1:41374->127.0.0.1:44721: write: broken pipe
Dec 12 18:46:25.284819 kubelet[2723]: E1212 18:46:25.284785 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:46:26.378815 sshd[4584]: Connection closed by 139.178.68.195 port 36456
Dec 12 18:46:26.379586 sshd-session[4581]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:26.384335 systemd[1]: sshd@23-172.238.172.51:22-139.178.68.195:36456.service: Deactivated successfully.
Dec 12 18:46:26.387653 systemd[1]: session-24.scope: Deactivated successfully.
Dec 12 18:46:26.388965 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit.
Dec 12 18:46:26.390814 systemd-logind[1521]: Removed session 24.
Dec 12 18:46:27.666568 update_engine[1522]: I20251212 18:46:27.666309 1522 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 12 18:46:27.666568 update_engine[1522]: I20251212 18:46:27.666574 1522 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 12 18:46:27.667064 update_engine[1522]: I20251212 18:46:27.666756 1522 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 12 18:46:27.667321 update_engine[1522]: I20251212 18:46:27.667289 1522 omaha_request_params.cc:62] Current group set to stable
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667397 1522 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667411 1522 update_attempter.cc:643] Scheduling an action processor start.
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667428 1522 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667459 1522 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667536 1522 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667546 1522 omaha_request_action.cc:272] Request:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]:
Dec 12 18:46:27.667611 update_engine[1522]: I20251212 18:46:27.667554 1522 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 18:46:27.668404 locksmithd[1562]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 12 18:46:27.668704 update_engine[1522]: I20251212 18:46:27.668417 1522 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 18:46:27.669190 update_engine[1522]: I20251212 18:46:27.669162 1522 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 18:46:27.723289 update_engine[1522]: E20251212 18:46:27.723196 1522 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 12 18:46:27.723632 update_engine[1522]: I20251212 18:46:27.723349 1522 libcurl_http_fetcher.cc:283] No HTTP response, retry 1