Dec 16 13:10:47.961074 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:10:47.961098 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:10:47.961106 kernel: BIOS-provided physical RAM map:
Dec 16 13:10:47.961113 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 16 13:10:47.961119 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 16 13:10:47.961125 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 13:10:47.961135 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 16 13:10:47.961142 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 16 13:10:47.961148 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 13:10:47.961154 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 13:10:47.961160 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:10:47.961166 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 13:10:47.961172 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 16 13:10:47.961178 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:10:47.961189 kernel: NX (Execute Disable) protection: active
Dec 16 13:10:47.961195 kernel: APIC: Static calls initialized
Dec 16 13:10:47.961202 kernel: SMBIOS 2.8 present.
Dec 16 13:10:47.961209 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 16 13:10:47.961215 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:10:47.961222 kernel: Hypervisor detected: KVM
Dec 16 13:10:47.961230 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 16 13:10:47.961237 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:10:47.961243 kernel: kvm-clock: using sched offset of 7184362195 cycles
Dec 16 13:10:47.961250 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:10:47.961257 kernel: tsc: Detected 1999.996 MHz processor
Dec 16 13:10:47.961264 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:10:47.961271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:10:47.961278 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 16 13:10:47.961285 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 13:10:47.961292 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:10:47.961301 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 16 13:10:47.961308 kernel: Using GB pages for direct mapping
Dec 16 13:10:47.961315 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:10:47.961321 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 16 13:10:47.961328 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961335 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961342 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961348 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 16 13:10:47.961355 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961364 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961374 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961381 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:10:47.961388 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 16 13:10:47.961395 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 16 13:10:47.961405 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 16 13:10:47.961412 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 16 13:10:47.961419 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 16 13:10:47.961426 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 16 13:10:47.961433 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 16 13:10:47.961440 kernel: No NUMA configuration found
Dec 16 13:10:47.961447 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 16 13:10:47.961454 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Dec 16 13:10:47.961461 kernel: Zone ranges:
Dec 16 13:10:47.961470 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:10:47.961477 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:10:47.961484 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 16 13:10:47.961491 kernel: Device empty
Dec 16 13:10:47.961498 kernel: Movable zone start for each node
Dec 16 13:10:47.961505 kernel: Early memory node ranges
Dec 16 13:10:47.961512 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 13:10:47.961519 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 16 13:10:47.961526 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 16 13:10:47.961533 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 16 13:10:47.961542 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:10:47.961549 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:10:47.961556 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 16 13:10:47.961564 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:10:47.961571 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:10:47.961578 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:10:47.961585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:10:47.961592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:10:47.961599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:10:47.961608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:10:47.961615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:10:47.961622 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:10:47.961629 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:10:47.961636 kernel: TSC deadline timer available
Dec 16 13:10:47.961643 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:10:47.961650 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:10:47.961683 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:10:47.961715 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:10:47.961725 kernel: CPU topo: Num. cores per package: 2
Dec 16 13:10:47.961732 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:10:47.961739 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:10:47.961746 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:10:47.961753 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:10:47.961760 kernel: kvm-guest: setup PV sched yield
Dec 16 13:10:47.961767 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 13:10:47.961774 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:10:47.961781 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:10:47.961790 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:10:47.961798 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:10:47.961805 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:10:47.961811 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:10:47.961818 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:10:47.961825 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:10:47.961833 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:10:47.961841 kernel: random: crng init done
Dec 16 13:10:47.961850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:10:47.961857 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:10:47.961864 kernel: Fallback order for Node 0: 0
Dec 16 13:10:47.961871 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 16 13:10:47.961878 kernel: Policy zone: Normal
Dec 16 13:10:47.961885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:10:47.961893 kernel: software IO TLB: area num 2.
Dec 16 13:10:47.961900 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:10:47.961907 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:10:47.961916 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:10:47.961923 kernel: Dynamic Preempt: voluntary
Dec 16 13:10:47.961930 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:10:47.961944 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:10:47.961951 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:10:47.961958 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:10:47.961965 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:10:47.961972 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:10:47.961979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:10:47.961998 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:10:47.962008 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:10:47.962022 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:10:47.962031 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:10:47.962039 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:10:47.962046 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:10:47.962053 kernel: Console: colour VGA+ 80x25
Dec 16 13:10:47.962061 kernel: printk: legacy console [tty0] enabled
Dec 16 13:10:47.962068 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:10:47.962076 kernel: ACPI: Core revision 20240827
Dec 16 13:10:47.962085 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:10:47.962093 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:10:47.962100 kernel: x2apic enabled
Dec 16 13:10:47.962108 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:10:47.962116 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:10:47.962123 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:10:47.962131 kernel: kvm-guest: setup PV IPIs
Dec 16 13:10:47.962140 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:10:47.962148 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
Dec 16 13:10:47.962155 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999996)
Dec 16 13:10:47.962163 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:10:47.962170 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:10:47.962177 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:10:47.962185 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:10:47.962192 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:10:47.962199 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:10:47.962208 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 13:10:47.962215 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:10:47.962222 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:10:47.962229 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:10:47.962237 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 13:10:47.962244 kernel: active return thunk: srso_alias_return_thunk
Dec 16 13:10:47.962251 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:10:47.962259 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 16 13:10:47.962284 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:10:47.962291 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:10:47.962298 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:10:47.962305 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:10:47.962312 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:10:47.962319 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:10:47.962326 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 16 13:10:47.962334 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 16 13:10:47.962341 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:10:47.962350 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:10:47.962357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:10:47.962364 kernel: landlock: Up and running.
Dec 16 13:10:47.962371 kernel: SELinux: Initializing.
Dec 16 13:10:47.962378 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:10:47.962385 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:10:47.962392 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 16 13:10:47.962400 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:10:47.962407 kernel: ... version: 0
Dec 16 13:10:47.962416 kernel: ... bit width: 48
Dec 16 13:10:47.962423 kernel: ... generic registers: 6
Dec 16 13:10:47.962430 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:10:47.962437 kernel: ... max period: 00007fffffffffff
Dec 16 13:10:47.962444 kernel: ... fixed-purpose events: 0
Dec 16 13:10:47.962451 kernel: ... event mask: 000000000000003f
Dec 16 13:10:47.962458 kernel: signal: max sigframe size: 3376
Dec 16 13:10:47.962465 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:10:47.962472 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:10:47.962481 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:10:47.962488 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:10:47.962495 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:10:47.962519 kernel: .... node #0, CPUs: #1
Dec 16 13:10:47.962528 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:10:47.962535 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Dec 16 13:10:47.962542 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235480K reserved, 0K cma-reserved)
Dec 16 13:10:47.962550 kernel: devtmpfs: initialized
Dec 16 13:10:47.962557 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:10:47.962566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:10:47.962573 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:10:47.962580 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:10:47.962588 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:10:47.962594 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:10:47.962602 kernel: audit: type=2000 audit(1765890644.845:1): state=initialized audit_enabled=0 res=1
Dec 16 13:10:47.962609 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:10:47.962616 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:10:47.962623 kernel: cpuidle: using governor menu
Dec 16 13:10:47.962632 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:10:47.962639 kernel: dca service started, version 1.12.1
Dec 16 13:10:47.962646 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 13:10:47.962653 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 13:10:47.962660 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:10:47.962667 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:10:47.962674 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:10:47.962682 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:10:47.962689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:10:47.962698 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:10:47.962705 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:10:47.962712 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:10:47.962719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:10:47.962726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:10:47.962733 kernel: ACPI: Interpreter enabled
Dec 16 13:10:47.962740 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:10:47.962747 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:10:47.962754 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:10:47.962763 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:10:47.962771 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:10:47.962778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:10:47.962972 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:10:47.963150 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:10:47.963278 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:10:47.963288 kernel: PCI host bridge to bus 0000:00
Dec 16 13:10:47.963420 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:10:47.963534 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:10:47.963645 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:10:47.963754 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 16 13:10:47.963863 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 13:10:47.963971 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 16 13:10:47.964100 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:10:47.964250 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:10:47.964389 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:10:47.964512 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 16 13:10:47.964756 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 16 13:10:47.964875 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 16 13:10:47.965029 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:10:47.965167 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:10:47.965294 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 16 13:10:47.965414 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 16 13:10:47.965532 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 16 13:10:47.965660 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:10:47.965780 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 16 13:10:47.965899 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 16 13:10:47.966056 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 16 13:10:47.966180 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 16 13:10:47.966764 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:10:47.966897 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:10:47.967071 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:10:47.967198 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 16 13:10:47.967319 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 16 13:10:47.967453 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:10:47.967573 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 13:10:47.967583 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:10:47.967590 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:10:47.967598 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:10:47.967605 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:10:47.967612 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:10:47.967619 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:10:47.967629 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:10:47.967636 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:10:47.967659 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:10:47.967666 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:10:47.967673 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:10:47.967680 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:10:47.967687 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:10:47.967694 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:10:47.967701 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:10:47.967710 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:10:47.967717 kernel: iommu: Default domain type: Translated
Dec 16 13:10:47.967724 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:10:47.967731 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:10:47.967739 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:10:47.967746 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 16 13:10:47.967753 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 16 13:10:47.967875 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:10:47.968023 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:10:47.968148 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:10:47.968158 kernel: vgaarb: loaded
Dec 16 13:10:47.968165 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:10:47.968172 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:10:47.968180 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:10:47.968187 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:10:47.968194 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:10:47.968201 kernel: pnp: PnP ACPI init
Dec 16 13:10:47.968336 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 13:10:47.968347 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:10:47.968354 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:10:47.968361 kernel: NET: Registered PF_INET protocol family
Dec 16 13:10:47.968368 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:10:47.968375 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:10:47.968383 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:10:47.968390 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:10:47.968400 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:10:47.968407 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:10:47.968414 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:10:47.968421 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:10:47.968428 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:10:47.968435 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:10:47.968547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:10:47.968657 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:10:47.968767 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:10:47.968881 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 16 13:10:47.969028 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 13:10:47.970443 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 16 13:10:47.970458 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:10:47.970467 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:10:47.970475 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 16 13:10:47.970482 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
Dec 16 13:10:47.970490 kernel: Initialise system trusted keyrings
Dec 16 13:10:47.970501 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:10:47.970508 kernel: Key type asymmetric registered
Dec 16 13:10:47.970515 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:10:47.970523 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:10:47.970530 kernel: io scheduler mq-deadline registered
Dec 16 13:10:47.970537 kernel: io scheduler kyber registered
Dec 16 13:10:47.970544 kernel: io scheduler bfq registered
Dec 16 13:10:47.970551 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:10:47.970559 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:10:47.970569 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:10:47.970577 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:10:47.970584 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:10:47.970591 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:10:47.970598 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:10:47.970606 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:10:47.970741 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 13:10:47.970754 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:10:47.970870 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 13:10:47.971007 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T13:10:47 UTC (1765890647)
Dec 16 13:10:47.976911 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 13:10:47.976924 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 13:10:47.976932 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:10:47.976940 kernel: Segment Routing with IPv6
Dec 16 13:10:47.976948 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:10:47.976956 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:10:47.976964 kernel: Key type dns_resolver registered
Dec 16 13:10:47.976976 kernel: IPI shorthand broadcast: enabled
Dec 16 13:10:47.977031 kernel: sched_clock: Marking stable (2838005557, 352253786)->(3286312641, -96053298)
Dec 16 13:10:47.977041 kernel: registered taskstats version 1
Dec 16 13:10:47.977049 kernel: Loading compiled-in X.509 certificates
Dec 16 13:10:47.977057 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:10:47.977065 kernel: Demotion targets for Node 0: null
Dec 16 13:10:47.977073 kernel: Key type .fscrypt registered
Dec 16 13:10:47.977081 kernel: Key type fscrypt-provisioning registered
Dec 16 13:10:47.977089 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:10:47.977100 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:10:47.977108 kernel: ima: No architecture policies found
Dec 16 13:10:47.977116 kernel: clk: Disabling unused clocks
Dec 16 13:10:47.977124 kernel: Warning: unable to open an initial console.
Dec 16 13:10:47.977132 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:10:47.977146 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:10:47.977154 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:10:47.977162 kernel: Run /init as init process
Dec 16 13:10:47.977170 kernel: with arguments:
Dec 16 13:10:47.977181 kernel: /init
Dec 16 13:10:47.977189 kernel: with environment:
Dec 16 13:10:47.977211 kernel: HOME=/
Dec 16 13:10:47.977222 kernel: TERM=linux
Dec 16 13:10:47.977231 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:10:47.977243 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:10:47.977252 systemd[1]: Detected virtualization kvm.
Dec 16 13:10:47.977262 systemd[1]: Detected architecture x86-64.
Dec 16 13:10:47.977270 systemd[1]: Running in initrd.
Dec 16 13:10:47.977279 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:10:47.977288 systemd[1]: Hostname set to <localhost>.
Dec 16 13:10:47.977296 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:10:47.977305 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:10:47.977313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:10:47.977322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:10:47.977339 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:10:47.977348 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:10:47.977357 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:10:47.977366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:10:47.977564 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:10:47.977573 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:10:47.977582 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:10:47.977593 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:10:47.977601 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:10:47.977610 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:10:47.977619 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:10:47.977627 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:10:47.977636 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:10:47.977645 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:10:47.977653 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:10:47.977662 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:10:47.977673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:10:47.977682 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:10:47.977693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:10:47.977704 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:10:47.977712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:10:47.977724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:10:47.977733 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:10:47.977742 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:10:47.977750 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:10:47.977759 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:10:47.977768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:10:47.977776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:47.977785 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:10:47.977796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:10:47.977811 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:10:47.977845 systemd-journald[187]: Collecting audit messages is disabled.
Dec 16 13:10:47.977869 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:10:47.977878 systemd-journald[187]: Journal started
Dec 16 13:10:47.977896 systemd-journald[187]: Runtime Journal (/run/log/journal/f52c989a60fb4b3d9aaf100b78616cea) is 8M, max 78.2M, 70.2M free.
Dec 16 13:10:47.943511 systemd-modules-load[188]: Inserted module 'overlay'
Dec 16 13:10:48.014189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:10:48.014205 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:10:48.014848 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 16 13:10:48.101933 kernel: Bridge firewalling registered
Dec 16 13:10:48.103353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:10:48.104610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:48.106445 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:10:48.112906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:10:48.117217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:10:48.123728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:10:48.130544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:10:48.144519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:10:48.150904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:10:48.155114 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:10:48.157127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:10:48.159281 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:10:48.167509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:10:48.172219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:10:48.185797 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:10:48.220824 systemd-resolved[228]: Positive Trust Anchors:
Dec 16 13:10:48.220837 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:10:48.220864 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:10:48.228212 systemd-resolved[228]: Defaulting to hostname 'linux'.
Dec 16 13:10:48.229751 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:10:48.231018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:10:48.297054 kernel: SCSI subsystem initialized
Dec 16 13:10:48.307071 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:10:48.319039 kernel: iscsi: registered transport (tcp)
Dec 16 13:10:48.342726 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:10:48.342776 kernel: QLogic iSCSI HBA Driver
Dec 16 13:10:48.367096 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:10:48.385057 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:10:48.389165 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:10:48.462071 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:10:48.465190 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:10:48.531044 kernel: raid6: avx2x4 gen() 30335 MB/s
Dec 16 13:10:48.549025 kernel: raid6: avx2x2 gen() 29052 MB/s
Dec 16 13:10:48.567093 kernel: raid6: avx2x1 gen() 19611 MB/s
Dec 16 13:10:48.567115 kernel: raid6: using algorithm avx2x4 gen() 30335 MB/s
Dec 16 13:10:48.588435 kernel: raid6: .... xor() 4639 MB/s, rmw enabled
Dec 16 13:10:48.588458 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 13:10:48.612027 kernel: xor: automatically using best checksumming function avx
Dec 16 13:10:48.760020 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:10:48.767953 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:10:48.770429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:10:48.795309 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Dec 16 13:10:48.801577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:10:48.806088 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:10:48.837965 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Dec 16 13:10:48.866681 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:10:48.869065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:10:48.959672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:10:48.963114 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:10:49.030449 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 16 13:10:49.044010 kernel: libata version 3.00 loaded.
Dec 16 13:10:49.048011 kernel: scsi host0: Virtio SCSI HBA
Dec 16 13:10:49.054011 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 16 13:10:49.058717 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 13:10:49.058912 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:10:49.058930 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 13:10:49.069111 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 13:10:49.069284 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 13:10:49.069622 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 13:10:49.074687 kernel: scsi host1: ahci
Dec 16 13:10:49.074870 kernel: scsi host2: ahci
Dec 16 13:10:49.083016 kernel: scsi host3: ahci
Dec 16 13:10:49.094030 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:10:49.098029 kernel: scsi host4: ahci
Dec 16 13:10:49.221026 kernel: scsi host5: ahci
Dec 16 13:10:49.230016 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 13:10:49.230576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:10:49.231903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:49.323534 kernel: scsi host6: ahci
Dec 16 13:10:49.323855 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1
Dec 16 13:10:49.323877 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1
Dec 16 13:10:49.323895 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1
Dec 16 13:10:49.323912 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1
Dec 16 13:10:49.323929 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1
Dec 16 13:10:49.323947 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1
Dec 16 13:10:49.323965 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 16 13:10:49.324271 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 16 13:10:49.324486 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 13:10:49.324694 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 16 13:10:49.324902 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 13:10:49.325143 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:10:49.325156 kernel: GPT:9289727 != 167739391
Dec 16 13:10:49.325166 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:10:49.325181 kernel: GPT:9289727 != 167739391
Dec 16 13:10:49.325191 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:10:49.325201 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:10:49.325211 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 13:10:49.322457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:49.326205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:10:49.328597 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:10:49.459217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:10:49.547348 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 13:10:49.547379 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 13:10:49.547391 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 16 13:10:49.548006 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 13:10:49.550023 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 16 13:10:49.553028 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 13:10:49.608740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 16 13:10:49.629423 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 16 13:10:49.639316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 13:10:49.640412 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:10:49.650461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 16 13:10:49.651293 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 16 13:10:49.654303 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:10:49.655335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:10:49.657476 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:10:49.661109 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:10:49.665150 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:10:49.686873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:10:49.687087 disk-uuid[615]: Primary Header is updated.
Dec 16 13:10:49.687087 disk-uuid[615]: Secondary Entries is updated.
Dec 16 13:10:49.687087 disk-uuid[615]: Secondary Header is updated.
Dec 16 13:10:49.693368 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:10:50.713037 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:10:50.713676 disk-uuid[621]: The operation has completed successfully.
Dec 16 13:10:50.767721 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:10:50.767865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:10:50.789962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:10:50.810164 sh[637]: Success
Dec 16 13:10:50.831951 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:10:50.831979 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:10:50.836021 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:10:50.847067 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 16 13:10:50.890521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:10:50.894893 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:10:50.913322 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:10:50.926020 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (649)
Dec 16 13:10:50.930623 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:10:50.930644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:10:50.942124 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:10:50.942190 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:10:50.946972 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:10:50.949677 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:10:50.950965 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:10:50.952273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:10:50.953327 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:10:50.957770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:10:50.986008 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (679)
Dec 16 13:10:50.990175 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:50.994007 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:10:51.003827 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:10:51.003851 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:10:51.003863 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:10:51.012016 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:10:51.015713 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:10:51.017863 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:10:51.139344 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:10:51.143003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:10:51.144221 ignition[750]: Ignition 2.22.0
Dec 16 13:10:51.144229 ignition[750]: Stage: fetch-offline
Dec 16 13:10:51.144259 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:51.144269 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:10:51.144361 ignition[750]: parsed url from cmdline: ""
Dec 16 13:10:51.144366 ignition[750]: no config URL provided
Dec 16 13:10:51.144371 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:10:51.144380 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:10:51.144386 ignition[750]: failed to fetch config: resource requires networking
Dec 16 13:10:51.150651 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:10:51.145575 ignition[750]: Ignition finished successfully
Dec 16 13:10:51.192142 systemd-networkd[823]: lo: Link UP
Dec 16 13:10:51.192155 systemd-networkd[823]: lo: Gained carrier
Dec 16 13:10:51.194653 systemd-networkd[823]: Enumeration completed
Dec 16 13:10:51.195073 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:10:51.195078 systemd-networkd[823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:10:51.196754 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:10:51.198594 systemd-networkd[823]: eth0: Link UP
Dec 16 13:10:51.198775 systemd-networkd[823]: eth0: Gained carrier
Dec 16 13:10:51.198785 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:10:51.199806 systemd[1]: Reached target network.target - Network.
Dec 16 13:10:51.202396 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:10:51.240137 ignition[828]: Ignition 2.22.0
Dec 16 13:10:51.240153 ignition[828]: Stage: fetch
Dec 16 13:10:51.240281 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:51.240293 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:10:51.240370 ignition[828]: parsed url from cmdline: ""
Dec 16 13:10:51.240375 ignition[828]: no config URL provided
Dec 16 13:10:51.240380 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:10:51.240389 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:10:51.240415 ignition[828]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 16 13:10:51.240795 ignition[828]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:10:51.441747 ignition[828]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 16 13:10:51.441898 ignition[828]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:10:51.842096 ignition[828]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 16 13:10:51.842268 ignition[828]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:10:51.914065 systemd-networkd[823]: eth0: DHCPv4 address 172.239.197.230/24, gateway 172.239.197.1 acquired from 23.210.200.12
Dec 16 13:10:52.638293 systemd-networkd[823]: eth0: Gained IPv6LL
Dec 16 13:10:52.643153 ignition[828]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 16 13:10:52.728082 ignition[828]: PUT result: OK
Dec 16 13:10:52.728133 ignition[828]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 16 13:10:52.843486 ignition[828]: GET result: OK
Dec 16 13:10:52.843590 ignition[828]: parsing config with SHA512: aa2b487fc0d887dfc3d66d891435bd5a98327f238f87ea719247b3d330549460ebada04ba654225d41bdab2a19fdde9709fd7b462b79e014e6e7ba7505ab6b72
Dec 16 13:10:52.848639 unknown[828]: fetched base config from "system"
Dec 16 13:10:52.848650 unknown[828]: fetched base config from "system"
Dec 16 13:10:52.848896 ignition[828]: fetch: fetch complete
Dec 16 13:10:52.848657 unknown[828]: fetched user config from "akamai"
Dec 16 13:10:52.848901 ignition[828]: fetch: fetch passed
Dec 16 13:10:52.851796 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:10:52.848943 ignition[828]: Ignition finished successfully
Dec 16 13:10:52.875103 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:10:52.917546 ignition[836]: Ignition 2.22.0
Dec 16 13:10:52.917561 ignition[836]: Stage: kargs
Dec 16 13:10:52.917673 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:52.917684 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:10:52.920187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:10:52.918407 ignition[836]: kargs: kargs passed
Dec 16 13:10:52.918444 ignition[836]: Ignition finished successfully
Dec 16 13:10:52.923139 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:10:52.952974 ignition[842]: Ignition 2.22.0
Dec 16 13:10:52.953004 ignition[842]: Stage: disks
Dec 16 13:10:52.953120 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:10:52.953130 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:10:52.955280 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:10:52.953794 ignition[842]: disks: disks passed
Dec 16 13:10:52.956492 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:10:52.953831 ignition[842]: Ignition finished successfully
Dec 16 13:10:52.957759 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:10:52.959278 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:10:52.960569 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:10:52.962143 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:10:52.964365 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:10:52.992097 systemd-fsck[850]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 13:10:52.996057 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:10:52.999475 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:10:53.110012 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:10:53.110839 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:10:53.112133 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:10:53.114324 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:10:53.118049 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:10:53.119735 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:10:53.120727 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:10:53.120750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:10:53.125782 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:10:53.129097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:10:53.136021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (858) Dec 16 13:10:53.142917 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:10:53.142965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:10:53.149669 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:10:53.149691 kernel: BTRFS info (device sda6): turning on async discard Dec 16 13:10:53.149703 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:10:53.153529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:10:53.190876 initrd-setup-root[882]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:10:53.196538 initrd-setup-root[889]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:10:53.201031 initrd-setup-root[896]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:10:53.205189 initrd-setup-root[903]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:10:53.291453 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:10:53.293759 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:10:53.297122 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:10:53.313398 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:10:53.317042 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:10:53.329878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:10:53.341972 ignition[972]: INFO : Ignition 2.22.0 Dec 16 13:10:53.341972 ignition[972]: INFO : Stage: mount Dec 16 13:10:53.343580 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:10:53.343580 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 13:10:53.343580 ignition[972]: INFO : mount: mount passed Dec 16 13:10:53.343580 ignition[972]: INFO : Ignition finished successfully Dec 16 13:10:53.344905 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:10:53.347750 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:10:54.112883 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:10:54.140679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (983) Dec 16 13:10:54.140717 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:10:54.144103 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:10:54.153525 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:10:54.153548 kernel: BTRFS info (device sda6): turning on async discard Dec 16 13:10:54.153565 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:10:54.157685 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
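
The three BTRFS info lines map one-to-one onto mount options: "enabling ssd optimizations" is ssd, "turning on async discard" is discard=async, and "enabling free space tree" is space_cache=v2 (per btrfs(5)). A Go sketch of the equivalent sysroot-oem.mount operation; device and mountpoint are from the log, the explicit option string is my reconstruction:

    package main

    import (
        "log"
        "syscall"
    )

    func main() {
        // Mount the OEM btrfs partition the way the kernel lines above
        // report it being mounted.
        if err := syscall.Mount("/dev/sda6", "/sysroot/oem", "btrfs", 0,
            "ssd,discard=async,space_cache=v2"); err != nil {
            log.Fatalf("mount /sysroot/oem: %v", err)
        }
    }
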
Dec 16 13:10:54.198674 ignition[999]: INFO : Ignition 2.22.0 Dec 16 13:10:54.198674 ignition[999]: INFO : Stage: files Dec 16 13:10:54.201057 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:10:54.201057 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 13:10:54.201057 ignition[999]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:10:54.201057 ignition[999]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:10:54.201057 ignition[999]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:10:54.207111 ignition[999]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:10:54.207111 ignition[999]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:10:54.207111 ignition[999]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:10:54.207111 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:10:54.207111 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 13:10:54.203781 unknown[999]: wrote ssh authorized keys file for user: core Dec 16 13:10:54.296535 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:10:54.473139 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:10:54.473139 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 13:10:54.473139 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 16 13:10:54.622649 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 13:10:54.666540 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 13:10:54.666540 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:10:54.669104 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:10:54.698306 
ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:10:54.698306 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:10:54.698306 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:10:54.698306 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:10:54.698306 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:10:54.698306 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 13:10:55.083354 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 13:10:55.292576 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:10:55.292576 ignition[999]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 13:10:55.295979 ignition[999]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:10:55.297703 ignition[999]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:10:55.297703 ignition[999]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 13:10:55.302499 ignition[999]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 16 13:10:55.302499 ignition[999]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 13:10:55.302499 ignition[999]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 13:10:55.302499 ignition[999]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 16 13:10:55.302499 ignition[999]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:10:55.302499 ignition[999]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:10:55.302499 ignition[999]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:10:55.302499 ignition[999]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:10:55.302499 ignition[999]: INFO : files: files passed Dec 16 13:10:55.302499 ignition[999]: INFO : Ignition finished successfully Dec 16 13:10:55.303877 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:10:55.308779 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
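
Each op(N) in the files stage is a small filesystem action against /sysroot. A single Go sketch under stated assumptions covering the two main patterns: the download-and-write behind op(3), op(4), and op(b) (real Ignition also verifies any hash declared in the config, omitted here), and the unit, drop-in, sysext link, and preset work from op(c) through op(10). Unit bodies come from the Ignition config and are not in the log, so they stay elided; the multi-user.target wants-link is my assumption about what "setting preset to enabled" ultimately expands to:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "path/filepath"
    )

    func must(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    // writeFromURL mirrors op(3)/op(4)/op(b): GET the source and write it
    // beneath /sysroot (hash verification and retries omitted).
    func writeFromURL(url, dest string, mode os.FileMode) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
            return err
        }
        f, err := os.OpenFile(dest, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, mode)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    // writeUnder and linkUnder create a file or symlink below /sysroot,
    // making parent directories as needed.
    func writeUnder(rel string, data []byte) {
        p := filepath.Join("/sysroot", rel)
        must(os.MkdirAll(filepath.Dir(p), 0o755))
        must(os.WriteFile(p, data, 0o644))
    }

    func linkUnder(rel, target string) {
        p := filepath.Join("/sysroot", rel)
        must(os.MkdirAll(filepath.Dir(p), 0o755))
        must(os.Symlink(target, p))
    }

    func main() {
        // op(3): fetch helm into /sysroot/opt.
        must(writeFromURL("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
            "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz", 0o644))

        // op(d)/op(f): unit and drop-in contents are elided (they live in
        // the Ignition config, not the log).
        writeUnder("etc/systemd/system/prepare-helm.service", []byte("..."))
        writeUnder("etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf",
            []byte("..."))

        // op(a): the link systemd-sysext scans later in boot, pointing at
        // the image downloaded by op(b).
        linkUnder("etc/extensions/kubernetes.raw",
            "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw")

        // op(10): applying the preset eventually yields an enablement
        // wants-link like this (the target is my assumption).
        linkUnder("etc/systemd/system/multi-user.target.wants/prepare-helm.service",
            "/etc/systemd/system/prepare-helm.service")
    }
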
Dec 16 13:10:55.318238 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:10:55.328169 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:10:55.328291 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:10:55.338428 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:10:55.338428 initrd-setup-root-after-ignition[1029]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:10:55.342076 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:10:55.341414 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:10:55.342570 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:10:55.344940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:10:55.387293 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:10:55.387439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:10:55.389630 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:10:55.390549 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:10:55.392303 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:10:55.393064 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:10:55.422372 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:10:55.425342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:10:55.445630 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:10:55.447335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:10:55.448229 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:10:55.450059 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:10:55.450161 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:10:55.452151 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:10:55.457729 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:10:55.459460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:10:55.460966 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:10:55.462464 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:10:55.464092 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:10:55.465791 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:10:55.467421 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:10:55.469110 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:10:55.470696 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:10:55.472358 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:10:55.473898 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:10:55.474067 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Dec 16 13:10:55.476192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:10:55.477339 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:10:55.478749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 13:10:55.479274 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:10:55.480475 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:10:55.480606 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:10:55.482741 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:10:55.482852 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:10:55.483923 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:10:55.484082 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:10:55.487056 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:10:55.490162 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:10:55.491954 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:10:55.492185 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:10:55.523821 ignition[1053]: INFO : Ignition 2.22.0 Dec 16 13:10:55.523821 ignition[1053]: INFO : Stage: umount Dec 16 13:10:55.523821 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:10:55.523821 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 13:10:55.523821 ignition[1053]: INFO : umount: umount passed Dec 16 13:10:55.523821 ignition[1053]: INFO : Ignition finished successfully Dec 16 13:10:55.493539 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:10:55.493637 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:10:55.502716 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:10:55.502824 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:10:55.527709 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:10:55.527822 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:10:55.530973 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:10:55.531056 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:10:55.533500 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:10:55.533551 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 13:10:55.539074 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:10:55.539123 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:10:55.540568 systemd[1]: Stopped target network.target - Network. Dec 16 13:10:55.542126 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:10:55.542180 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:10:55.544529 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:10:55.545192 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:10:55.547741 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:10:55.549281 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 16 13:10:55.550126 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:10:55.550878 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:10:55.550932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:10:55.553237 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:10:55.553281 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:10:55.554622 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:10:55.554681 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:10:55.556229 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:10:55.556279 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:10:55.557806 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:10:55.559286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:10:55.563828 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:10:55.565042 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:10:55.565158 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:10:55.571802 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 13:10:55.572091 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:10:55.572230 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:10:55.574935 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 13:10:55.575217 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:10:55.575329 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:10:55.578860 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:10:55.580178 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:10:55.580224 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:10:55.581736 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:10:55.581790 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:10:55.583955 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:10:55.585485 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:10:55.585577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:10:55.586376 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:10:55.586427 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:10:55.590150 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:10:55.590202 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:10:55.593710 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:10:55.593762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:10:55.595734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:10:55.598764 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:10:55.598829 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:10:55.615547 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 16 13:10:55.615738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:10:55.617480 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:10:55.617584 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:10:55.619689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:10:55.619765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:10:55.621456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:10:55.621493 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:10:55.623084 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:10:55.623138 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:10:55.625491 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:10:55.625539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:10:55.627168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:10:55.627222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:10:55.630084 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:10:55.631444 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:10:55.631499 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:10:55.635132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:10:55.635183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:10:55.636910 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 13:10:55.636959 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:10:55.640574 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:10:55.640622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:10:55.641733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:10:55.641782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:10:55.646020 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 13:10:55.646079 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 16 13:10:55.646124 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 13:10:55.646168 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:10:55.648495 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:10:55.648590 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:10:55.650284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:10:55.652316 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:10:55.686350 systemd[1]: Switching root. Dec 16 13:10:55.724403 systemd-journald[187]: Journal stopped Dec 16 13:10:56.963737 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
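
"Switching root" is the initrd's hand-off: /sysroot is moved over /, PID 1 re-executes from the real root, and journald receives SIGTERM and stops, exactly as the last two entries show. A minimal Go sketch of the classic switch_root sequence; systemd's own implementation additionally purges the old initramfs contents from memory, which is omitted here:

    package main

    import (
        "log"
        "os"
        "syscall"
    )

    func main() {
        must := func(err error) {
            if err != nil {
                log.Fatal(err)
            }
        }
        must(syscall.Chdir("/sysroot"))
        // Move the prepared root over / ...
        must(syscall.Mount(".", "/", "", syscall.MS_MOVE, ""))
        must(syscall.Chroot("."))
        must(syscall.Chdir("/"))
        // ... then replace this process with the real root's systemd.
        must(syscall.Exec("/usr/lib/systemd/systemd",
            []string{"systemd"}, os.Environ()))
    }
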
Dec 16 13:10:56.963775 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:10:56.963789 kernel: SELinux: policy capability open_perms=1 Dec 16 13:10:56.963799 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:10:56.963810 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:10:56.963822 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:10:56.963832 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:10:56.963843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:10:56.963852 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:10:56.963862 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:10:56.963872 kernel: audit: type=1403 audit(1765890655.910:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 13:10:56.963882 systemd[1]: Successfully loaded SELinux policy in 69.284ms. Dec 16 13:10:56.963896 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.671ms. Dec 16 13:10:56.963909 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:10:56.963921 systemd[1]: Detected virtualization kvm. Dec 16 13:10:56.963932 systemd[1]: Detected architecture x86-64. Dec 16 13:10:56.963944 systemd[1]: Detected first boot. Dec 16 13:10:56.963955 systemd[1]: Initializing machine ID from random generator. Dec 16 13:10:56.963966 zram_generator::config[1098]: No configuration found. Dec 16 13:10:56.963978 kernel: Guest personality initialized and is inactive Dec 16 13:10:56.966948 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 13:10:56.966971 kernel: Initialized host personality Dec 16 13:10:56.966983 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:10:56.967010 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:10:56.967028 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 13:10:56.967040 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:10:56.967051 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:10:56.967062 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:10:56.967073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:10:56.967084 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:10:56.967095 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:10:56.967109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:10:56.967119 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:10:56.967130 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:10:56.967141 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:10:56.967151 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:10:56.967162 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
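
"Detected first boot" and "Initializing machine ID from random generator" go together: on first boot there is no /etc/machine-id yet, so systemd mints one from 128 random bits, rendered as 32 lowercase hex digits. A Go sketch; the UUID-v4 version/variant stamping mirrors sd_id128 behavior as I understand it:

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    func main() {
        var id [16]byte
        if _, err := rand.Read(id[:]); err != nil {
            panic(err)
        }
        id[6] = (id[6] & 0x0f) | 0x40 // version 4
        id[8] = (id[8] & 0x3f) | 0x80 // RFC 4122 variant
        // Written to /etc/machine-id as one 32-hex-digit line.
        fmt.Println(hex.EncodeToString(id[:]))
    }
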
Dec 16 13:10:56.967173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:10:56.967183 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:10:56.967197 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:10:56.967211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:10:56.967223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:10:56.967234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:10:56.967246 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:10:56.967257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:10:56.967267 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:10:56.967282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:10:56.967295 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:10:56.967306 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:10:56.967318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:10:56.967329 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:10:56.967341 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:10:56.967352 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:10:56.967363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:10:56.967374 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:10:56.967388 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:10:56.967400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:10:56.967411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:10:56.967424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:10:56.967438 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:10:56.967449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:10:56.967461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:10:56.967472 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:10:56.967484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:56.967496 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:10:56.967507 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:10:56.967519 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:10:56.967532 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:10:56.967544 systemd[1]: Reached target machines.target - Containers. Dec 16 13:10:56.967555 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 16 13:10:56.967566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:10:56.967577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:10:56.967588 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:10:56.967599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:10:56.967610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:10:56.967621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:10:56.967634 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:10:56.967646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:10:56.967657 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:10:56.967669 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:10:56.967680 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:10:56.967692 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:10:56.967702 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:10:56.967729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:10:56.967762 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:10:56.967774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:10:56.967786 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:10:56.967807 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:10:56.967829 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:10:56.967840 kernel: fuse: init (API version 7.41) Dec 16 13:10:56.967851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:10:56.967863 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 13:10:56.967877 systemd[1]: Stopped verity-setup.service. Dec 16 13:10:56.967888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:56.967900 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:10:56.967938 systemd-journald[1186]: Collecting audit messages is disabled. Dec 16 13:10:56.967962 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:10:56.967974 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:10:56.967986 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:10:56.969844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:10:56.969859 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:10:56.969871 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Dec 16 13:10:56.969882 kernel: loop: module loaded Dec 16 13:10:56.969894 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:10:56.969906 systemd-journald[1186]: Journal started Dec 16 13:10:56.969931 systemd-journald[1186]: Runtime Journal (/run/log/journal/273f2ab358564645906c6c0d88825ec5) is 8M, max 78.2M, 70.2M free. Dec 16 13:10:56.544966 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:10:56.558089 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 13:10:56.558707 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:10:56.975920 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:10:56.975867 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:10:56.977262 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:10:56.979438 kernel: ACPI: bus type drm_connector registered Dec 16 13:10:56.979885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:10:56.980173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:10:56.981324 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:10:56.981524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:10:56.982688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:10:56.982952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:10:56.984264 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:10:56.984525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:10:56.985592 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:10:56.985849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:10:56.987053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:10:56.988230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:10:56.989361 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:10:56.990604 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:10:57.004691 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:10:57.008096 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:10:57.011088 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:10:57.011845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:10:57.011872 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:10:57.013583 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:10:57.022206 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:10:57.024155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:10:57.028146 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:10:57.033483 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Dec 16 13:10:57.036185 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:10:57.037431 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:10:57.038218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:10:57.042145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:10:57.048184 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:10:57.051863 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:10:57.055828 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:10:57.057846 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:10:57.096137 systemd-journald[1186]: Time spent on flushing to /var/log/journal/273f2ab358564645906c6c0d88825ec5 is 103.206ms for 1013 entries. Dec 16 13:10:57.096137 systemd-journald[1186]: System Journal (/var/log/journal/273f2ab358564645906c6c0d88825ec5) is 8M, max 195.6M, 187.6M free. Dec 16 13:10:57.205010 systemd-journald[1186]: Received client request to flush runtime journal. Dec 16 13:10:57.205221 kernel: loop0: detected capacity change from 0 to 8 Dec 16 13:10:57.205243 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:10:57.205538 kernel: loop1: detected capacity change from 0 to 128560 Dec 16 13:10:57.095332 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:10:57.102216 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:10:57.106329 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:10:57.157473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:10:57.196241 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:10:57.197441 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 16 13:10:57.197453 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 16 13:10:57.200247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:10:57.211024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:10:57.213803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:10:57.226139 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:10:57.242793 kernel: loop2: detected capacity change from 0 to 110984 Dec 16 13:10:57.284071 kernel: loop3: detected capacity change from 0 to 219144 Dec 16 13:10:57.310368 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:10:57.315204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:10:57.343031 kernel: loop4: detected capacity change from 0 to 8 Dec 16 13:10:57.348034 kernel: loop5: detected capacity change from 0 to 128560 Dec 16 13:10:57.363477 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 16 13:10:57.364844 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. 
Dec 16 13:10:57.373023 kernel: loop6: detected capacity change from 0 to 110984 Dec 16 13:10:57.375103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:10:57.389015 kernel: loop7: detected capacity change from 0 to 219144 Dec 16 13:10:57.410557 (sd-merge)[1249]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Dec 16 13:10:57.411604 (sd-merge)[1249]: Merged extensions into '/usr'. Dec 16 13:10:57.419536 systemd[1]: Reload requested from client PID 1223 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:10:57.419551 systemd[1]: Reloading... Dec 16 13:10:57.488041 zram_generator::config[1272]: No configuration found. Dec 16 13:10:57.624296 ldconfig[1218]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:10:57.734003 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:10:57.734585 systemd[1]: Reloading finished in 312 ms. Dec 16 13:10:57.767635 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:10:57.768815 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:10:57.769909 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:10:57.782682 systemd[1]: Starting ensure-sysext.service... Dec 16 13:10:57.786127 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:10:57.798109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:10:57.812281 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:10:57.812292 systemd[1]: Reloading... Dec 16 13:10:57.821800 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:10:57.824321 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:10:57.824633 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:10:57.824891 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:10:57.825799 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:10:57.826250 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Dec 16 13:10:57.826378 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Dec 16 13:10:57.834017 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:10:57.834035 systemd-tmpfiles[1321]: Skipping /boot Dec 16 13:10:57.845451 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Dec 16 13:10:57.852723 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:10:57.852741 systemd-tmpfiles[1321]: Skipping /boot Dec 16 13:10:57.892010 zram_generator::config[1348]: No configuration found. Dec 16 13:10:58.168312 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:10:58.168932 systemd[1]: Reloading finished in 356 ms. Dec 16 13:10:58.176023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
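
The (sd-merge) lines are systemd-sysext at work: each extension image's /usr tree (the loop0..loop7 capacity changes above are those images being attached) is stacked over the base /usr with a read-only overlayfs, which is why the merge is immediately followed by a full daemon reload. A Go sketch of the final overlay mount; only the extension names come from the log, and the staging paths under /run are assumptions about where sysext mounts each image before building the overlay:

    package main

    import (
        "log"
        "syscall"
    )

    func main() {
        // Base /usr is the bottom layer; extensions stack above it.
        lower := "/run/systemd/sysext/oem-akamai/usr" +
            ":/run/systemd/sysext/kubernetes/usr" +
            ":/run/systemd/sysext/docker-flatcar/usr" +
            ":/run/systemd/sysext/containerd-flatcar/usr" +
            ":/usr"
        // No upperdir makes the overlay read-only by construction.
        if err := syscall.Mount("overlay", "/usr", "overlay",
            syscall.MS_RDONLY, "lowerdir="+lower); err != nil {
            log.Fatalf("overlay on /usr: %v", err)
        }
    }
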
Dec 16 13:10:58.178698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:10:58.204008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:10:58.218059 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:10:58.224009 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 13:10:58.227219 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 13:10:58.229270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:58.232165 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:10:58.237070 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:10:58.237975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:10:58.240212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:10:58.244200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:10:58.249790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:10:58.250671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:10:58.250767 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:10:58.252890 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:10:58.259256 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:10:58.267200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:10:58.274167 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:10:58.274930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:58.281230 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:10:58.285727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:58.285897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:10:58.286083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:10:58.286169 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:10:58.286250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:58.292063 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 13:10:58.292290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:10:58.297289 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:10:58.298200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:10:58.298294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:10:58.298407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:10:58.303672 systemd[1]: Finished ensure-sysext.service. Dec 16 13:10:58.305049 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:10:58.312911 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 13:10:58.347108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:10:58.351080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:10:58.364616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:10:58.366033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:10:58.368606 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:10:58.370282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:10:58.373696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:10:58.374175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:10:58.380716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:10:58.384060 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:10:58.386967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:10:58.394152 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:10:58.395894 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:10:58.397030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:10:58.411053 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:10:58.414573 augenrules[1483]: No rules Dec 16 13:10:58.415648 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:10:58.417470 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:10:58.418099 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:10:58.458038 kernel: EDAC MC: Ver: 3.0.0 Dec 16 13:10:58.459546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 13:10:58.461007 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:10:58.464674 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 16 13:10:58.499840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:10:58.530148 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:10:58.628739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:10:58.705806 systemd-networkd[1436]: lo: Link UP Dec 16 13:10:58.705820 systemd-networkd[1436]: lo: Gained carrier Dec 16 13:10:58.708069 systemd-networkd[1436]: Enumeration completed Dec 16 13:10:58.708173 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:10:58.708510 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:10:58.708517 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:10:58.709229 systemd-networkd[1436]: eth0: Link UP Dec 16 13:10:58.709723 systemd-networkd[1436]: eth0: Gained carrier Dec 16 13:10:58.709750 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:10:58.715777 systemd-resolved[1439]: Positive Trust Anchors: Dec 16 13:10:58.715795 systemd-resolved[1439]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:10:58.715822 systemd-resolved[1439]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:10:58.715850 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:10:58.719731 systemd-resolved[1439]: Defaulting to hostname 'linux'. Dec 16 13:10:58.720232 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:10:58.724072 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:10:58.724834 systemd[1]: Reached target network.target - Network. Dec 16 13:10:58.726077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:10:58.733302 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 13:10:58.734829 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:10:58.758283 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:10:58.759109 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:10:58.759863 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:10:58.760615 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:10:58.761380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:10:58.761414 systemd[1]: Reached target paths.target - Path Units. 
Dec 16 13:10:58.762095 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:10:58.763001 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:10:58.763842 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:10:58.764603 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:10:58.766655 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:10:58.769115 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:10:58.771800 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:10:58.772783 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:10:58.773562 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:10:58.776674 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:10:58.778017 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:10:58.779742 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:10:58.780701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:10:58.782875 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:10:58.783615 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:10:58.784570 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:10:58.784614 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:10:58.785778 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:10:58.789104 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:10:58.794668 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:10:58.798156 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:10:58.803497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:10:58.808159 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:10:58.811057 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:10:58.812268 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:10:58.819249 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:10:58.823763 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:10:58.834323 jq[1517]: false Dec 16 13:10:58.835166 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:10:58.839720 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:10:58.852293 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:10:58.854882 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:10:58.855370 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 16 13:10:58.856280 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache Dec 16 13:10:58.857031 oslogin_cache_refresh[1519]: Refreshing passwd entry cache Dec 16 13:10:58.858181 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:10:58.861625 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting Dec 16 13:10:58.862879 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:10:58.866067 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:10:58.866067 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache Dec 16 13:10:58.866067 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting Dec 16 13:10:58.866067 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:10:58.864022 oslogin_cache_refresh[1519]: Failure getting users, quitting Dec 16 13:10:58.864042 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:10:58.864086 oslogin_cache_refresh[1519]: Refreshing group entry cache Dec 16 13:10:58.864557 oslogin_cache_refresh[1519]: Failure getting groups, quitting Dec 16 13:10:58.864567 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:10:58.867167 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:10:58.870481 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:10:58.870724 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:10:58.871174 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:10:58.871405 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:10:58.884875 coreos-metadata[1514]: Dec 16 13:10:58.884 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 16 13:10:58.887202 extend-filesystems[1518]: Found /dev/sda6 Dec 16 13:10:58.898789 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:10:58.901277 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:10:58.907369 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:10:58.920454 jq[1533]: true Dec 16 13:10:58.921040 extend-filesystems[1518]: Found /dev/sda9 Dec 16 13:10:58.937142 extend-filesystems[1518]: Checking size of /dev/sda9 Dec 16 13:10:58.941094 tar[1537]: linux-amd64/LICENSE Dec 16 13:10:58.942383 tar[1537]: linux-amd64/helm Dec 16 13:10:58.946932 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:10:58.948075 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:10:58.961192 update_engine[1529]: I20251216 13:10:58.960980 1529 main.cc:92] Flatcar Update Engine starting Dec 16 13:10:58.973519 extend-filesystems[1518]: Resized partition /dev/sda9 Dec 16 13:10:58.972944 dbus-daemon[1515]: [system] SELinux support is enabled Dec 16 13:10:58.984259 jq[1555]: true Dec 16 13:10:58.973832 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
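
[Note] The coreos-metadata line above ("Putting http://169.254.169.254/v1/token") is the first step of Linode's token-based metadata flow: PUT to /v1/token to obtain a short-lived token, then present it on subsequent GETs such as /v1/instance (fetched later in this log). A rough sketch under those assumptions; the header names are my recollection of Linode's metadata documentation, so treat them as assumptions rather than verified values:

    import urllib.request

    BASE = "http://169.254.169.254/v1"
    # Step 1: request a short-lived token (header name assumed).
    req = urllib.request.Request(f"{BASE}/token", method="PUT",
                                 headers={"Metadata-Token-Expiry-Seconds": "3600"})
    token = urllib.request.urlopen(req).read().decode()
    # Step 2: use the token to fetch instance data, as the agent does above.
    req = urllib.request.Request(f"{BASE}/instance",
                                 headers={"Metadata-Token": token})
    print(urllib.request.urlopen(req).read().decode())
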
Dec 16 13:10:58.986431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:10:58.986474 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:10:58.987582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:10:58.987604 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:10:58.995413 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:10:58.999315 update_engine[1529]: I20251216 13:10:58.997666 1529 update_check_scheduler.cc:74] Next update check in 4m54s Dec 16 13:10:58.997910 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:10:59.009007 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Dec 16 13:10:59.017549 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:10:59.127219 bash[1581]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:10:59.130541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:10:59.131234 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:10:59.131262 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:10:59.134088 systemd-logind[1526]: New seat seat0. Dec 16 13:10:59.153605 systemd[1]: Starting sshkeys.service... Dec 16 13:10:59.155478 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:10:59.220982 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:10:59.227770 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:10:59.256018 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Dec 16 13:10:59.270929 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 13:10:59.270929 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 16 13:10:59.270929 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Dec 16 13:10:59.283220 extend-filesystems[1518]: Resized filesystem in /dev/sda9 Dec 16 13:10:59.271840 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:10:59.277902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:10:59.370448 sshd_keygen[1538]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:10:59.404409 systemd-networkd[1436]: eth0: DHCPv4 address 172.239.197.230/24, gateway 172.239.197.1 acquired from 23.210.200.12 Dec 16 13:10:59.405369 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Dec 16 13:10:59.409899 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1436 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:10:59.418067 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:10:59.420751 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
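
[Note] For scale, the resize2fs figures above convert as follows: with 4 KiB blocks, 553472 blocks before and 20360187 after, the root filesystem grows from about 2.1 GiB to about 77.7 GiB to fill the disk:

    # Arithmetic from the extend-filesystems/resize2fs messages above.
    block = 4096
    old_blocks, new_blocks = 553472, 20360187
    print(f"{old_blocks * block / 2**30:.1f} GiB -> {new_blocks * block / 2**30:.1f} GiB")
    # 2.1 GiB -> 77.7 GiB
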
Dec 16 13:10:59.429714 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:10:59.449098 containerd[1546]: time="2025-12-16T13:10:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:10:59.449098 containerd[1546]: time="2025-12-16T13:10:59.447793550Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:10:59.449392 coreos-metadata[1588]: Dec 16 13:10:59.448 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 16 13:10:59.462674 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:10:59.463431 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:10:59.469191 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:10:59.471759 containerd[1546]: time="2025-12-16T13:10:59.471697458Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="39.91µs" Dec 16 13:10:59.471759 containerd[1546]: time="2025-12-16T13:10:59.471756008Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:10:59.471830 containerd[1546]: time="2025-12-16T13:10:59.471781878Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:10:59.472054 containerd[1546]: time="2025-12-16T13:10:59.472019859Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:10:59.472087 containerd[1546]: time="2025-12-16T13:10:59.472057599Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:10:59.472109 containerd[1546]: time="2025-12-16T13:10:59.472093099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:10:59.472202 containerd[1546]: time="2025-12-16T13:10:59.472170119Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:10:59.472202 containerd[1546]: time="2025-12-16T13:10:59.472192819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472432800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472455870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472472000Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472483880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472602640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472857330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472888801Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472902241Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.472934541Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.473199671Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:10:59.473311 containerd[1546]: time="2025-12-16T13:10:59.473268961Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.480974627Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481051127Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481068237Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481082367Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481143037Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481155727Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481171147Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481183197Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481194177Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481204447Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481216347Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:10:59.481228 containerd[1546]: time="2025-12-16T13:10:59.481233337Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481345807Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 
13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481367337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481382688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481401208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481427618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481440528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481451468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:10:59.481459 containerd[1546]: time="2025-12-16T13:10:59.481462088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:10:59.482337 containerd[1546]: time="2025-12-16T13:10:59.481475418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:10:59.482337 containerd[1546]: time="2025-12-16T13:10:59.481486418Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:10:59.482337 containerd[1546]: time="2025-12-16T13:10:59.481496758Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:10:59.482337 containerd[1546]: time="2025-12-16T13:10:59.481545188Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:10:59.482337 containerd[1546]: time="2025-12-16T13:10:59.481559388Z" level=info msg="Start snapshots syncer" Dec 16 13:10:59.482337 containerd[1546]: time="2025-12-16T13:10:59.481637388Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:10:59.482752 containerd[1546]: time="2025-12-16T13:10:59.482694940Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:10:59.482752 containerd[1546]: time="2025-12-16T13:10:59.482751570Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:10:59.483250 containerd[1546]: time="2025-12-16T13:10:59.483183391Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483346231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483386972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483399762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483410342Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483425292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483436642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483448562Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483476162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: 
time="2025-12-16T13:10:59.483488992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:10:59.483527 containerd[1546]: time="2025-12-16T13:10:59.483500312Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.483885733Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.483966723Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.483977923Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484008593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484018223Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484028823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484047943Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484066523Z" level=info msg="runtime interface created" Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484072533Z" level=info msg="created NRI interface" Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484081683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484092483Z" level=info msg="Connect containerd service" Dec 16 13:10:59.484141 containerd[1546]: time="2025-12-16T13:10:59.484112403Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:10:59.485554 containerd[1546]: time="2025-12-16T13:10:59.485510576Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:10:59.505855 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:10:59.508861 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:10:59.512093 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:10:59.514243 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:10:59.544836 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:10:59.554408 coreos-metadata[1588]: Dec 16 13:10:59.552 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 16 13:10:59.565952 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 16 13:10:59.569784 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:10:59.570265 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1605 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:11:00.019079 systemd-resolved[1439]: Clock change detected. Flushing caches. Dec 16 13:11:00.019217 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 13:11:00.020411 systemd-timesyncd[1459]: Contacted time server 23.186.168.127:123 (0.flatcar.pool.ntp.org). Dec 16 13:11:00.020470 systemd-timesyncd[1459]: Initial clock synchronization to Tue 2025-12-16 13:11:00.018885 UTC. Dec 16 13:11:00.056232 containerd[1546]: time="2025-12-16T13:11:00.055823306Z" level=info msg="Start subscribing containerd event" Dec 16 13:11:00.056501 containerd[1546]: time="2025-12-16T13:11:00.056240036Z" level=info msg="Start recovering state" Dec 16 13:11:00.057179 containerd[1546]: time="2025-12-16T13:11:00.056944708Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:11:00.057409 containerd[1546]: time="2025-12-16T13:11:00.057016148Z" level=info msg="Start event monitor" Dec 16 13:11:00.057409 containerd[1546]: time="2025-12-16T13:11:00.057408649Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:11:00.057458 containerd[1546]: time="2025-12-16T13:11:00.057418289Z" level=info msg="Start streaming server" Dec 16 13:11:00.057458 containerd[1546]: time="2025-12-16T13:11:00.057429779Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:11:00.057458 containerd[1546]: time="2025-12-16T13:11:00.057437679Z" level=info msg="runtime interface starting up..." Dec 16 13:11:00.058060 containerd[1546]: time="2025-12-16T13:11:00.058028940Z" level=info msg="starting plugins..." Dec 16 13:11:00.058060 containerd[1546]: time="2025-12-16T13:11:00.058059840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:11:00.058163 containerd[1546]: time="2025-12-16T13:11:00.058013500Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:11:00.058833 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:11:00.059638 containerd[1546]: time="2025-12-16T13:11:00.059610433Z" level=info msg="containerd successfully booted in 0.172314s" Dec 16 13:11:00.109061 polkitd[1630]: Started polkitd version 126 Dec 16 13:11:00.113726 polkitd[1630]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:11:00.114304 polkitd[1630]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:11:00.114635 polkitd[1630]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:11:00.114900 polkitd[1630]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:11:00.114967 polkitd[1630]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:11:00.115044 polkitd[1630]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:11:00.115992 polkitd[1630]: Finished loading, compiling and executing 2 rules Dec 16 13:11:00.117649 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 16 13:11:00.119004 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:11:00.120066 polkitd[1630]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:11:00.132665 coreos-metadata[1588]: Dec 16 13:11:00.132 INFO Fetch successful Dec 16 13:11:00.141020 systemd-hostnamed[1605]: Hostname set to <172-239-197-230> (transient) Dec 16 13:11:00.141137 systemd-resolved[1439]: System hostname changed to '172-239-197-230'. Dec 16 13:11:00.155395 update-ssh-keys[1643]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:11:00.157365 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:11:00.164293 systemd[1]: Finished sshkeys.service. Dec 16 13:11:00.176968 tar[1537]: linux-amd64/README.md Dec 16 13:11:00.196215 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:11:00.336948 coreos-metadata[1514]: Dec 16 13:11:00.336 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 16 13:11:00.454014 coreos-metadata[1514]: Dec 16 13:11:00.453 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 16 13:11:00.639338 coreos-metadata[1514]: Dec 16 13:11:00.639 INFO Fetch successful Dec 16 13:11:00.639338 coreos-metadata[1514]: Dec 16 13:11:00.639 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 16 13:11:00.824129 systemd-networkd[1436]: eth0: Gained IPv6LL Dec 16 13:11:00.827901 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:11:00.829337 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:11:00.832828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:00.837945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:11:00.860890 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:11:00.903226 coreos-metadata[1514]: Dec 16 13:11:00.902 INFO Fetch successful Dec 16 13:11:01.019009 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:11:01.021757 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:11:01.832115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:01.834105 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:11:01.837940 systemd[1]: Startup finished in 2.912s (kernel) + 8.242s (initrd) + 5.553s (userspace) = 16.709s. Dec 16 13:11:01.904424 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:11:02.390372 kubelet[1688]: E1216 13:11:02.390318 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:11:02.394113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:11:02.394322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:11:02.394846 systemd[1]: kubelet.service: Consumed 878ms CPU time, 257.7M memory peak. Dec 16 13:11:03.583072 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
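
[Note] The kubelet failure above is likewise expected on a fresh node: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so the unit crash-loops until that happens. For orientation only, that file holds a KubeletConfiguration object; a minimal sketch of its shape follows (the real contents are cluster-specific and generated by kubeadm, not this). The systemd cgroup driver shown matches the SystemdCgroup=true setting in the containerd config dumped earlier:

    # Shape of the missing /var/lib/kubelet/config.yaml; values here are
    # illustrative, kubeadm generates the real file during init/join.
    cfg = ("apiVersion: kubelet.config.k8s.io/v1beta1\n"
           "kind: KubeletConfiguration\n"
           "cgroupDriver: systemd\n")
    print(cfg)
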
Dec 16 13:11:03.585071 systemd[1]: Started sshd@0-172.239.197.230:22-139.178.89.65:36226.service - OpenSSH per-connection server daemon (139.178.89.65:36226). Dec 16 13:11:03.959577 sshd[1700]: Accepted publickey for core from 139.178.89.65 port 36226 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:03.961838 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:03.970812 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:11:03.972796 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:11:03.986800 systemd-logind[1526]: New session 1 of user core. Dec 16 13:11:03.996168 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:11:04.001348 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:11:04.018707 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:11:04.021575 systemd-logind[1526]: New session c1 of user core. Dec 16 13:11:04.165889 systemd[1705]: Queued start job for default target default.target. Dec 16 13:11:04.177285 systemd[1705]: Created slice app.slice - User Application Slice. Dec 16 13:11:04.177328 systemd[1705]: Reached target paths.target - Paths. Dec 16 13:11:04.177389 systemd[1705]: Reached target timers.target - Timers. Dec 16 13:11:04.179351 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:11:04.191923 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:11:04.192007 systemd[1705]: Reached target sockets.target - Sockets. Dec 16 13:11:04.192089 systemd[1705]: Reached target basic.target - Basic System. Dec 16 13:11:04.192161 systemd[1705]: Reached target default.target - Main User Target. Dec 16 13:11:04.192206 systemd[1705]: Startup finished in 163ms. Dec 16 13:11:04.192560 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:11:04.200057 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:11:04.473201 systemd[1]: Started sshd@1-172.239.197.230:22-139.178.89.65:36228.service - OpenSSH per-connection server daemon (139.178.89.65:36228). Dec 16 13:11:04.850598 sshd[1716]: Accepted publickey for core from 139.178.89.65 port 36228 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:04.852456 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:04.860167 systemd-logind[1526]: New session 2 of user core. Dec 16 13:11:04.867101 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:11:05.120265 sshd[1719]: Connection closed by 139.178.89.65 port 36228 Dec 16 13:11:05.121200 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:05.125681 systemd[1]: sshd@1-172.239.197.230:22-139.178.89.65:36228.service: Deactivated successfully. Dec 16 13:11:05.127624 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:11:05.128390 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:11:05.130073 systemd-logind[1526]: Removed session 2. Dec 16 13:11:05.186713 systemd[1]: Started sshd@2-172.239.197.230:22-139.178.89.65:36236.service - OpenSSH per-connection server daemon (139.178.89.65:36236). 
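
[Note] The "SHA256:LWML..." string in the sshd lines above is the standard OpenSSH key fingerprint: the SHA-256 digest of the raw public key blob, base64-encoded with the trailing padding stripped. A sketch with a placeholder blob:

    import base64
    import hashlib

    # OpenSSH-style fingerprint: SHA256 of the raw key blob, unpadded base64.
    # The blob below is a placeholder; normally it is the base64-decoded
    # second field of an authorized_keys or host key line.
    blob = b"placeholder-public-key-blob"
    fp = base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")
    print("SHA256:" + fp)
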
Dec 16 13:11:05.538629 sshd[1725]: Accepted publickey for core from 139.178.89.65 port 36236 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:05.540362 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:05.545405 systemd-logind[1526]: New session 3 of user core. Dec 16 13:11:05.547974 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:11:05.786903 sshd[1728]: Connection closed by 139.178.89.65 port 36236 Dec 16 13:11:05.787513 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:05.792662 systemd[1]: sshd@2-172.239.197.230:22-139.178.89.65:36236.service: Deactivated successfully. Dec 16 13:11:05.794479 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:11:05.795273 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:11:05.796647 systemd-logind[1526]: Removed session 3. Dec 16 13:11:05.852025 systemd[1]: Started sshd@3-172.239.197.230:22-139.178.89.65:36248.service - OpenSSH per-connection server daemon (139.178.89.65:36248). Dec 16 13:11:06.204306 sshd[1734]: Accepted publickey for core from 139.178.89.65 port 36248 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:06.206089 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:06.212048 systemd-logind[1526]: New session 4 of user core. Dec 16 13:11:06.219978 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:11:06.459333 sshd[1737]: Connection closed by 139.178.89.65 port 36248 Dec 16 13:11:06.460341 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:06.465566 systemd[1]: sshd@3-172.239.197.230:22-139.178.89.65:36248.service: Deactivated successfully. Dec 16 13:11:06.468136 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:11:06.469265 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:11:06.471169 systemd-logind[1526]: Removed session 4. Dec 16 13:11:06.524169 systemd[1]: Started sshd@4-172.239.197.230:22-139.178.89.65:36256.service - OpenSSH per-connection server daemon (139.178.89.65:36256). Dec 16 13:11:06.874959 sshd[1743]: Accepted publickey for core from 139.178.89.65 port 36256 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:06.876325 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:06.881912 systemd-logind[1526]: New session 5 of user core. Dec 16 13:11:06.893006 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:11:07.077703 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:11:07.078204 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:07.093832 sudo[1747]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:07.144263 sshd[1746]: Connection closed by 139.178.89.65 port 36256 Dec 16 13:11:07.145309 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:07.149746 systemd[1]: sshd@4-172.239.197.230:22-139.178.89.65:36256.service: Deactivated successfully. Dec 16 13:11:07.151450 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:11:07.152329 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:11:07.154184 systemd-logind[1526]: Removed session 5. 
Dec 16 13:11:07.213799 systemd[1]: Started sshd@5-172.239.197.230:22-139.178.89.65:36268.service - OpenSSH per-connection server daemon (139.178.89.65:36268). Dec 16 13:11:07.585310 sshd[1753]: Accepted publickey for core from 139.178.89.65 port 36268 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:07.587092 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:07.592960 systemd-logind[1526]: New session 6 of user core. Dec 16 13:11:07.605989 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:11:07.790878 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:11:07.791213 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:07.796408 sudo[1758]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:07.803252 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:11:07.803774 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:07.813560 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:11:07.852849 augenrules[1780]: No rules Dec 16 13:11:07.853373 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:11:07.853685 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:11:07.854630 sudo[1757]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:07.907739 sshd[1756]: Connection closed by 139.178.89.65 port 36268 Dec 16 13:11:07.908226 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:07.912630 systemd[1]: sshd@5-172.239.197.230:22-139.178.89.65:36268.service: Deactivated successfully. Dec 16 13:11:07.915042 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:11:07.915882 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:11:07.917460 systemd-logind[1526]: Removed session 6. Dec 16 13:11:07.975045 systemd[1]: Started sshd@6-172.239.197.230:22-139.178.89.65:36272.service - OpenSSH per-connection server daemon (139.178.89.65:36272). Dec 16 13:11:08.332895 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 36272 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:11:08.334984 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:11:08.339362 systemd-logind[1526]: New session 7 of user core. Dec 16 13:11:08.346976 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:11:08.540008 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:11:08.540325 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:11:08.815618 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:11:08.826210 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:11:09.048983 dockerd[1812]: time="2025-12-16T13:11:09.047660795Z" level=info msg="Starting up" Dec 16 13:11:09.049791 dockerd[1812]: time="2025-12-16T13:11:09.049759829Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:11:09.066318 dockerd[1812]: time="2025-12-16T13:11:09.066159692Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:11:09.109691 dockerd[1812]: time="2025-12-16T13:11:09.109664869Z" level=info msg="Loading containers: start." Dec 16 13:11:09.119883 kernel: Initializing XFRM netlink socket Dec 16 13:11:09.386577 systemd-networkd[1436]: docker0: Link UP Dec 16 13:11:09.389483 dockerd[1812]: time="2025-12-16T13:11:09.389442509Z" level=info msg="Loading containers: done." Dec 16 13:11:09.404757 dockerd[1812]: time="2025-12-16T13:11:09.404550919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:11:09.404923 dockerd[1812]: time="2025-12-16T13:11:09.404780689Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:11:09.404923 dockerd[1812]: time="2025-12-16T13:11:09.404906740Z" level=info msg="Initializing buildkit" Dec 16 13:11:09.429296 dockerd[1812]: time="2025-12-16T13:11:09.429260558Z" level=info msg="Completed buildkit initialization" Dec 16 13:11:09.436967 dockerd[1812]: time="2025-12-16T13:11:09.436907084Z" level=info msg="Daemon has completed initialization" Dec 16 13:11:09.437121 dockerd[1812]: time="2025-12-16T13:11:09.437047154Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:11:09.437402 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:11:09.961929 containerd[1546]: time="2025-12-16T13:11:09.961894433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 13:11:10.682493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259648057.mount: Deactivated successfully. 
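
[Note] Docker's warning above about CONFIG_OVERLAY_FS_REDIRECT_DIR concerns a kernel build option, so it can be checked against the running kernel. A sketch assuming the kernel exposes /proc/config.gz (i.e. was built with CONFIG_IKCONFIG_PROC, which is not guaranteed on every kernel):

    import gzip

    # Grep the running kernel's build config for the overlayfs option
    # mentioned in Docker's warning; requires /proc/config.gz to exist.
    with gzip.open("/proc/config.gz", "rt") as f:
        for line in f:
            if "OVERLAY_FS_REDIRECT_DIR" in line:
                print(line.rstrip())
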
Dec 16 13:11:11.743557 containerd[1546]: time="2025-12-16T13:11:11.743516876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:11.744467 containerd[1546]: time="2025-12-16T13:11:11.744444668Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Dec 16 13:11:11.745877 containerd[1546]: time="2025-12-16T13:11:11.744835208Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:11.746915 containerd[1546]: time="2025-12-16T13:11:11.746889792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:11.748289 containerd[1546]: time="2025-12-16T13:11:11.748253145Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.786325472s" Dec 16 13:11:11.748289 containerd[1546]: time="2025-12-16T13:11:11.748279175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 13:11:11.748828 containerd[1546]: time="2025-12-16T13:11:11.748800796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 13:11:12.645259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:11:12.647999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:12.835269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:12.843215 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:11:12.887808 kubelet[2090]: E1216 13:11:12.887765 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:11:12.893147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:11:12.893343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:11:12.894390 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.4M memory peak. 
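
[Note] The pull records carry enough data to estimate registry throughput. Taking the kube-apiserver image above, 27064672 bytes in 1.786325472s works out to roughly 14.4 MiB/s:

    # Throughput implied by the kube-apiserver pull logged above.
    size_bytes, seconds = 27_064_672, 1.786325472
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # ~14.4 MiB/s
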
Dec 16 13:11:13.118030 containerd[1546]: time="2025-12-16T13:11:13.117889184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:13.119171 containerd[1546]: time="2025-12-16T13:11:13.119130086Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Dec 16 13:11:13.120689 containerd[1546]: time="2025-12-16T13:11:13.119633897Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:13.124404 containerd[1546]: time="2025-12-16T13:11:13.124371337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:13.126007 containerd[1546]: time="2025-12-16T13:11:13.125975160Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.377142264s" Dec 16 13:11:13.126043 containerd[1546]: time="2025-12-16T13:11:13.126007760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 13:11:13.126427 containerd[1546]: time="2025-12-16T13:11:13.126403481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 13:11:13.862578 systemd[1]: Started sshd@7-172.239.197.230:22-14.63.166.251:48467.service - OpenSSH per-connection server daemon (14.63.166.251:48467). 
Dec 16 13:11:14.184174 containerd[1546]: time="2025-12-16T13:11:14.183919815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:14.184928 containerd[1546]: time="2025-12-16T13:11:14.184686347Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Dec 16 13:11:14.185373 containerd[1546]: time="2025-12-16T13:11:14.185345878Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:14.187990 containerd[1546]: time="2025-12-16T13:11:14.187964373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:14.188879 containerd[1546]: time="2025-12-16T13:11:14.188842295Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.062359394s" Dec 16 13:11:14.188946 containerd[1546]: time="2025-12-16T13:11:14.188931085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 13:11:14.189999 containerd[1546]: time="2025-12-16T13:11:14.189487527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:11:14.306540 sshd[2098]: Connection closed by 14.63.166.251 port 48467 [preauth] Dec 16 13:11:14.307960 systemd[1]: sshd@7-172.239.197.230:22-14.63.166.251:48467.service: Deactivated successfully. Dec 16 13:11:15.333139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855104091.mount: Deactivated successfully. 
Dec 16 13:11:15.644655 containerd[1546]: time="2025-12-16T13:11:15.644457916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:15.645311 containerd[1546]: time="2025-12-16T13:11:15.645270027Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 16 13:11:15.646877 containerd[1546]: time="2025-12-16T13:11:15.646355240Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:15.649068 containerd[1546]: time="2025-12-16T13:11:15.649015035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:15.649574 containerd[1546]: time="2025-12-16T13:11:15.649551846Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.460039499s" Dec 16 13:11:15.649644 containerd[1546]: time="2025-12-16T13:11:15.649630396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:11:15.650531 containerd[1546]: time="2025-12-16T13:11:15.650490228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 13:11:16.367026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097065738.mount: Deactivated successfully. 
Dec 16 13:11:17.184240 containerd[1546]: time="2025-12-16T13:11:17.184181875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:17.185162 containerd[1546]: time="2025-12-16T13:11:17.185119626Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Dec 16 13:11:17.186617 containerd[1546]: time="2025-12-16T13:11:17.185596947Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:17.188000 containerd[1546]: time="2025-12-16T13:11:17.187977712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:17.188751 containerd[1546]: time="2025-12-16T13:11:17.188724314Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.538186896s" Dec 16 13:11:17.188791 containerd[1546]: time="2025-12-16T13:11:17.188752784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 13:11:17.189701 containerd[1546]: time="2025-12-16T13:11:17.189676466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 13:11:17.798104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133278934.mount: Deactivated successfully. 
Dec 16 13:11:17.804338 containerd[1546]: time="2025-12-16T13:11:17.804195784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:17.804969 containerd[1546]: time="2025-12-16T13:11:17.804908226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Dec 16 13:11:17.805931 containerd[1546]: time="2025-12-16T13:11:17.805535027Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:17.807517 containerd[1546]: time="2025-12-16T13:11:17.807495941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:17.808364 containerd[1546]: time="2025-12-16T13:11:17.808335373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 618.631007ms" Dec 16 13:11:17.808415 containerd[1546]: time="2025-12-16T13:11:17.808370493Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 13:11:17.809270 containerd[1546]: time="2025-12-16T13:11:17.809160264Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 13:11:18.490426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584006172.mount: Deactivated successfully. Dec 16 13:11:20.477927 containerd[1546]: time="2025-12-16T13:11:20.477282309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:20.478456 containerd[1546]: time="2025-12-16T13:11:20.478347701Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 16 13:11:20.479245 containerd[1546]: time="2025-12-16T13:11:20.479222443Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:20.481726 containerd[1546]: time="2025-12-16T13:11:20.481688648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:20.482889 containerd[1546]: time="2025-12-16T13:11:20.482652510Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.673431676s" Dec 16 13:11:20.482969 containerd[1546]: time="2025-12-16T13:11:20.482952491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:11:23.146657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 13:11:23.156072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:23.348014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:23.358184 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:11:23.409757 kubelet[2254]: E1216 13:11:23.409655 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:11:23.415901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:11:23.416412 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:11:23.417451 systemd[1]: kubelet.service: Consumed 211ms CPU time, 107.8M memory peak. Dec 16 13:11:24.316155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:24.316316 systemd[1]: kubelet.service: Consumed 211ms CPU time, 107.8M memory peak. Dec 16 13:11:24.319752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:24.350190 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-7.scope)... Dec 16 13:11:24.350202 systemd[1]: Reloading... Dec 16 13:11:24.496912 zram_generator::config[2312]: No configuration found. Dec 16 13:11:24.726936 systemd[1]: Reloading finished in 376 ms. Dec 16 13:11:24.802930 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:11:24.803050 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:11:24.803410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:24.803467 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.2M memory peak. Dec 16 13:11:24.806039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:24.992212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:25.001193 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:11:25.036992 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:11:25.036992 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
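The exit-1 failure above is the expected pre-bootstrap state: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml, exits on the missing file, and systemd's restart policy keeps retrying; the daemon reload and restart that follow presumably come from the kubeadm run in session-7 laying the file down. A minimal sketch of the error chain the message shows, assuming only the standard library (the real loader lives in kubelet's kubeletconfig package):

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    // loadKubeletConfig mirrors the failure chain in the log above: a missing
    // /var/lib/kubelet/config.yaml surfaces as fs.ErrNotExist, which the
    // caller wraps and reports before exiting non-zero.
    func loadKubeletConfig(path string) ([]byte, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, fmt.Errorf("failed to load Kubelet config file %q: %w", path, err)
    	}
    	return data, nil
    }

    func main() {
    	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
    		if errors.Is(err, fs.ErrNotExist) {
    			fmt.Fprintln(os.Stderr, err) // kubelet exits 1; systemd schedules a restart
    			os.Exit(1)
    		}
    	}
    }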
Dec 16 13:11:25.037276 kubelet[2366]: I1216 13:11:25.037040 2366 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:11:25.562149 kubelet[2366]: I1216 13:11:25.562087 2366 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:11:25.562149 kubelet[2366]: I1216 13:11:25.562137 2366 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:11:25.562324 kubelet[2366]: I1216 13:11:25.562180 2366 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:11:25.562324 kubelet[2366]: I1216 13:11:25.562202 2366 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:11:25.563021 kubelet[2366]: I1216 13:11:25.562989 2366 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:11:25.570803 kubelet[2366]: E1216 13:11:25.570750 2366 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.197.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:11:25.570947 kubelet[2366]: I1216 13:11:25.570912 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:11:25.576415 kubelet[2366]: I1216 13:11:25.576327 2366 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:11:25.580916 kubelet[2366]: I1216 13:11:25.580421 2366 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:11:25.581569 kubelet[2366]: I1216 13:11:25.581503 2366 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:11:25.581952 kubelet[2366]: I1216 13:11:25.581539 2366 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-197-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:11:25.581952 kubelet[2366]: I1216 13:11:25.581683 2366 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:11:25.581952 kubelet[2366]: I1216 13:11:25.581691 2366 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:11:25.582262 kubelet[2366]: I1216 13:11:25.581987 2366 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:11:25.583689 kubelet[2366]: I1216 13:11:25.583643 2366 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:25.583836 kubelet[2366]: I1216 13:11:25.583818 2366 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:11:25.583903 kubelet[2366]: I1216 13:11:25.583845 2366 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:11:25.583903 kubelet[2366]: I1216 13:11:25.583892 2366 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:11:25.584913 kubelet[2366]: I1216 13:11:25.583915 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:11:25.587296 kubelet[2366]: I1216 13:11:25.587264 2366 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:11:25.587923 kubelet[2366]: I1216 13:11:25.587680 2366 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:11:25.587923 kubelet[2366]: I1216 13:11:25.587917 2366 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:11:25.588026 
kubelet[2366]: W1216 13:11:25.587965 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:11:25.592050 kubelet[2366]: I1216 13:11:25.592021 2366 server.go:1262] "Started kubelet" Dec 16 13:11:25.592225 kubelet[2366]: E1216 13:11:25.592180 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.197.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:11:25.592318 kubelet[2366]: E1216 13:11:25.592285 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.197.230:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-197-230&limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:11:25.594290 kubelet[2366]: I1216 13:11:25.594261 2366 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:11:25.598706 kubelet[2366]: I1216 13:11:25.598663 2366 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:11:25.598882 kubelet[2366]: I1216 13:11:25.598714 2366 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:11:25.599034 kubelet[2366]: I1216 13:11:25.599004 2366 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:11:25.600234 kubelet[2366]: I1216 13:11:25.600213 2366 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:11:25.604716 kubelet[2366]: I1216 13:11:25.604683 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:11:25.607539 kubelet[2366]: I1216 13:11:25.607496 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:11:25.607994 kubelet[2366]: I1216 13:11:25.607962 2366 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:11:25.608215 kubelet[2366]: E1216 13:11:25.608188 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:25.608745 kubelet[2366]: I1216 13:11:25.608706 2366 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:11:25.608801 kubelet[2366]: I1216 13:11:25.608753 2366 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:11:25.609479 kubelet[2366]: E1216 13:11:25.609435 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-230?timeout=10s\": dial tcp 172.239.197.230:6443: connect: connection refused" interval="200ms" Dec 16 13:11:25.612615 kubelet[2366]: E1216 13:11:25.612205 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.197.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:11:25.613636 kubelet[2366]: E1216 13:11:25.612501 2366 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://172.239.197.230:6443/api/v1/namespaces/default/events\": dial tcp 172.239.197.230:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-197-230.1881b438f2e1d89a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-197-230,UID:172-239-197-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-197-230,},FirstTimestamp:2025-12-16 13:11:25.592000666 +0000 UTC m=+0.587104115,LastTimestamp:2025-12-16 13:11:25.592000666 +0000 UTC m=+0.587104115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-197-230,}" Dec 16 13:11:25.614043 kubelet[2366]: I1216 13:11:25.614002 2366 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:11:25.614118 kubelet[2366]: I1216 13:11:25.614079 2366 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:11:25.616028 kubelet[2366]: I1216 13:11:25.615999 2366 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:11:25.631435 kubelet[2366]: I1216 13:11:25.631406 2366 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:11:25.632873 kubelet[2366]: I1216 13:11:25.632842 2366 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:11:25.632961 kubelet[2366]: I1216 13:11:25.632951 2366 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:11:25.633038 kubelet[2366]: I1216 13:11:25.633028 2366 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:11:25.633141 kubelet[2366]: E1216 13:11:25.633124 2366 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:11:25.638381 kubelet[2366]: E1216 13:11:25.638359 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.197.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:11:25.641298 kubelet[2366]: E1216 13:11:25.641239 2366 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:11:25.648221 kubelet[2366]: I1216 13:11:25.648204 2366 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:11:25.648600 kubelet[2366]: I1216 13:11:25.648466 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:11:25.648600 kubelet[2366]: I1216 13:11:25.648486 2366 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:25.650750 kubelet[2366]: I1216 13:11:25.650727 2366 policy_none.go:49] "None policy: Start" Dec 16 13:11:25.650832 kubelet[2366]: I1216 13:11:25.650821 2366 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:11:25.650910 kubelet[2366]: I1216 13:11:25.650899 2366 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:11:25.651712 kubelet[2366]: I1216 13:11:25.651657 2366 policy_none.go:47] "Start" Dec 16 13:11:25.656906 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:11:25.669785 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:11:25.674317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:11:25.685257 kubelet[2366]: E1216 13:11:25.685070 2366 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:11:25.685827 kubelet[2366]: I1216 13:11:25.685793 2366 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:11:25.686094 kubelet[2366]: I1216 13:11:25.685815 2366 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:11:25.687158 kubelet[2366]: I1216 13:11:25.687131 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:11:25.688571 kubelet[2366]: E1216 13:11:25.688300 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:11:25.688571 kubelet[2366]: E1216 13:11:25.688385 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-197-230\" not found" Dec 16 13:11:25.747347 systemd[1]: Created slice kubepods-burstable-podab340b9a9b40ee7999202d0401cb8fd4.slice - libcontainer container kubepods-burstable-podab340b9a9b40ee7999202d0401cb8fd4.slice. Dec 16 13:11:25.756819 kubelet[2366]: E1216 13:11:25.756793 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:25.759461 systemd[1]: Created slice kubepods-burstable-pod0081f2d6a121b8f958f066ecf2424673.slice - libcontainer container kubepods-burstable-pod0081f2d6a121b8f958f066ecf2424673.slice. Dec 16 13:11:25.768122 kubelet[2366]: E1216 13:11:25.768083 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:25.771363 systemd[1]: Created slice kubepods-burstable-pode02ea800c29c47f03c2f6a231e090398.slice - libcontainer container kubepods-burstable-pode02ea800c29c47f03c2f6a231e090398.slice. 
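The slice names above follow the systemd cgroup driver's pattern kubepods-<qos>-pod<uid>.slice; dashes in a pod UID are escaped to underscores because systemd reserves "-" for slice nesting, and the static-pod UIDs here (config hashes such as ab340b9a...) contain none, so they pass through unchanged. A sketch of the naming convention as read from the log, not kubelet's actual implementation:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceName builds the expected systemd slice for a pod, following the
    // kubepods-<qos>-pod<uid>.slice pattern seen in the log; dashes in the
    // UID are replaced with underscores.
    func sliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(sliceName("burstable", "ab340b9a9b40ee7999202d0401cb8fd4"))
    	// kubepods-burstable-podab340b9a9b40ee7999202d0401cb8fd4.slice
    }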
Dec 16 13:11:25.774107 kubelet[2366]: E1216 13:11:25.774077 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:25.788745 kubelet[2366]: I1216 13:11:25.788425 2366 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-230" Dec 16 13:11:25.789121 kubelet[2366]: E1216 13:11:25.789081 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.197.230:6443/api/v1/nodes\": dial tcp 172.239.197.230:6443: connect: connection refused" node="172-239-197-230" Dec 16 13:11:25.810555 kubelet[2366]: I1216 13:11:25.810529 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-ca-certs\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:25.810654 kubelet[2366]: E1216 13:11:25.810611 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-230?timeout=10s\": dial tcp 172.239.197.230:6443: connect: connection refused" interval="400ms" Dec 16 13:11:25.810688 kubelet[2366]: I1216 13:11:25.810675 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:25.810812 kubelet[2366]: I1216 13:11:25.810698 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-flexvolume-dir\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:25.810812 kubelet[2366]: I1216 13:11:25.810714 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-k8s-certs\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:25.810812 kubelet[2366]: I1216 13:11:25.810749 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-kubeconfig\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:25.810812 kubelet[2366]: I1216 13:11:25.810776 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e02ea800c29c47f03c2f6a231e090398-kubeconfig\") pod \"kube-scheduler-172-239-197-230\" (UID: \"e02ea800c29c47f03c2f6a231e090398\") " pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:25.810812 kubelet[2366]: I1216 13:11:25.810791 2366 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab340b9a9b40ee7999202d0401cb8fd4-ca-certs\") pod \"kube-apiserver-172-239-197-230\" (UID: \"ab340b9a9b40ee7999202d0401cb8fd4\") " pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:25.810977 kubelet[2366]: I1216 13:11:25.810805 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab340b9a9b40ee7999202d0401cb8fd4-k8s-certs\") pod \"kube-apiserver-172-239-197-230\" (UID: \"ab340b9a9b40ee7999202d0401cb8fd4\") " pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:25.810977 kubelet[2366]: I1216 13:11:25.810818 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab340b9a9b40ee7999202d0401cb8fd4-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-197-230\" (UID: \"ab340b9a9b40ee7999202d0401cb8fd4\") " pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:25.991307 kubelet[2366]: I1216 13:11:25.991207 2366 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-230" Dec 16 13:11:25.991661 kubelet[2366]: E1216 13:11:25.991612 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.197.230:6443/api/v1/nodes\": dial tcp 172.239.197.230:6443: connect: connection refused" node="172-239-197-230" Dec 16 13:11:26.060201 kubelet[2366]: E1216 13:11:26.060163 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:26.062302 containerd[1546]: time="2025-12-16T13:11:26.061194595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-197-230,Uid:ab340b9a9b40ee7999202d0401cb8fd4,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:26.070897 kubelet[2366]: E1216 13:11:26.070869 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:26.071371 containerd[1546]: time="2025-12-16T13:11:26.071331585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-197-230,Uid:0081f2d6a121b8f958f066ecf2424673,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:26.075600 kubelet[2366]: E1216 13:11:26.075581 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:26.076105 containerd[1546]: time="2025-12-16T13:11:26.076085364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-197-230,Uid:e02ea800c29c47f03c2f6a231e090398,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:26.212006 kubelet[2366]: E1216 13:11:26.211946 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-230?timeout=10s\": dial tcp 172.239.197.230:6443: connect: connection refused" interval="800ms" Dec 16 13:11:26.393558 kubelet[2366]: I1216 13:11:26.393508 2366 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-230" Dec 16 
13:11:26.394146 kubelet[2366]: E1216 13:11:26.394084 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.197.230:6443/api/v1/nodes\": dial tcp 172.239.197.230:6443: connect: connection refused" node="172-239-197-230" Dec 16 13:11:26.628436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538499958.mount: Deactivated successfully. Dec 16 13:11:26.632899 containerd[1546]: time="2025-12-16T13:11:26.632843128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:26.634121 containerd[1546]: time="2025-12-16T13:11:26.634097650Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:26.635150 containerd[1546]: time="2025-12-16T13:11:26.635129912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:11:26.635523 containerd[1546]: time="2025-12-16T13:11:26.635503543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:11:26.636960 containerd[1546]: time="2025-12-16T13:11:26.636939726Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:26.638143 containerd[1546]: time="2025-12-16T13:11:26.638103428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:26.639033 containerd[1546]: time="2025-12-16T13:11:26.638941980Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:11:26.639439 containerd[1546]: time="2025-12-16T13:11:26.639419531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:11:26.640156 containerd[1546]: time="2025-12-16T13:11:26.640130842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 563.364416ms" Dec 16 13:11:26.641117 containerd[1546]: time="2025-12-16T13:11:26.640996704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.078696ms" Dec 16 13:11:26.642561 containerd[1546]: time="2025-12-16T13:11:26.642541307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 570.11881ms" Dec 16 13:11:26.691751 containerd[1546]: time="2025-12-16T13:11:26.690200942Z" level=info msg="connecting to shim b9d52420e49ebaad5c5806632059f6f1839c28481bc349377d93703a9a6eb3ed" address="unix:///run/containerd/s/e6f4f0646957a5f343a90788cd078963233b01c2f8d312eaf2e003903ab95eab" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:26.694600 containerd[1546]: time="2025-12-16T13:11:26.694018630Z" level=info msg="connecting to shim 88cb53c08628a17db72a087cd80915b98afc3f15b02257f3960884ed9c91711d" address="unix:///run/containerd/s/e556d46b20500ac4d6f9d039f79c2ebe1208d3d53e8c526abc71c33897fb2a41" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:26.695450 containerd[1546]: time="2025-12-16T13:11:26.695430673Z" level=info msg="connecting to shim cffd9a47099be41e3c70b94f4143a82aa445c7249dd7e5c4ff7004c19f5919b6" address="unix:///run/containerd/s/15509632fc7d4cddf1e91df2343d21634b87d9464c208d3e68b05b545ae4685d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:26.728976 systemd[1]: Started cri-containerd-cffd9a47099be41e3c70b94f4143a82aa445c7249dd7e5c4ff7004c19f5919b6.scope - libcontainer container cffd9a47099be41e3c70b94f4143a82aa445c7249dd7e5c4ff7004c19f5919b6. Dec 16 13:11:26.737011 systemd[1]: Started cri-containerd-88cb53c08628a17db72a087cd80915b98afc3f15b02257f3960884ed9c91711d.scope - libcontainer container 88cb53c08628a17db72a087cd80915b98afc3f15b02257f3960884ed9c91711d. Dec 16 13:11:26.738905 systemd[1]: Started cri-containerd-b9d52420e49ebaad5c5806632059f6f1839c28481bc349377d93703a9a6eb3ed.scope - libcontainer container b9d52420e49ebaad5c5806632059f6f1839c28481bc349377d93703a9a6eb3ed. Dec 16 13:11:26.835154 kubelet[2366]: E1216 13:11:26.834450 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.197.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:11:26.841944 containerd[1546]: time="2025-12-16T13:11:26.841898286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-197-230,Uid:ab340b9a9b40ee7999202d0401cb8fd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cffd9a47099be41e3c70b94f4143a82aa445c7249dd7e5c4ff7004c19f5919b6\"" Dec 16 13:11:26.843343 kubelet[2366]: E1216 13:11:26.843151 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:26.845093 containerd[1546]: time="2025-12-16T13:11:26.845057052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-197-230,Uid:0081f2d6a121b8f958f066ecf2424673,Namespace:kube-system,Attempt:0,} returns sandbox id \"88cb53c08628a17db72a087cd80915b98afc3f15b02257f3960884ed9c91711d\"" Dec 16 13:11:26.845924 kubelet[2366]: E1216 13:11:26.845898 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:26.846102 containerd[1546]: time="2025-12-16T13:11:26.846083554Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-239-197-230,Uid:e02ea800c29c47f03c2f6a231e090398,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9d52420e49ebaad5c5806632059f6f1839c28481bc349377d93703a9a6eb3ed\"" Dec 16 13:11:26.847177 kubelet[2366]: E1216 13:11:26.847161 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:26.849090 containerd[1546]: time="2025-12-16T13:11:26.848982710Z" level=info msg="CreateContainer within sandbox \"cffd9a47099be41e3c70b94f4143a82aa445c7249dd7e5c4ff7004c19f5919b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:11:26.850069 containerd[1546]: time="2025-12-16T13:11:26.850049192Z" level=info msg="CreateContainer within sandbox \"88cb53c08628a17db72a087cd80915b98afc3f15b02257f3960884ed9c91711d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:11:26.850983 containerd[1546]: time="2025-12-16T13:11:26.850964784Z" level=info msg="CreateContainer within sandbox \"b9d52420e49ebaad5c5806632059f6f1839c28481bc349377d93703a9a6eb3ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:11:26.857254 containerd[1546]: time="2025-12-16T13:11:26.857235556Z" level=info msg="Container d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:26.858325 kubelet[2366]: E1216 13:11:26.858295 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.197.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:11:26.860929 containerd[1546]: time="2025-12-16T13:11:26.860889784Z" level=info msg="Container 367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:26.862876 containerd[1546]: time="2025-12-16T13:11:26.862831237Z" level=info msg="Container 26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:26.863017 containerd[1546]: time="2025-12-16T13:11:26.862941068Z" level=info msg="CreateContainer within sandbox \"cffd9a47099be41e3c70b94f4143a82aa445c7249dd7e5c4ff7004c19f5919b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93\"" Dec 16 13:11:26.863978 containerd[1546]: time="2025-12-16T13:11:26.863959210Z" level=info msg="StartContainer for \"d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93\"" Dec 16 13:11:26.865958 containerd[1546]: time="2025-12-16T13:11:26.865937964Z" level=info msg="connecting to shim d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93" address="unix:///run/containerd/s/15509632fc7d4cddf1e91df2343d21634b87d9464c208d3e68b05b545ae4685d" protocol=ttrpc version=3 Dec 16 13:11:26.866366 containerd[1546]: time="2025-12-16T13:11:26.866335744Z" level=info msg="CreateContainer within sandbox \"88cb53c08628a17db72a087cd80915b98afc3f15b02257f3960884ed9c91711d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1\"" Dec 16 13:11:26.866986 containerd[1546]: 
time="2025-12-16T13:11:26.866852755Z" level=info msg="StartContainer for \"367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1\"" Dec 16 13:11:26.868566 containerd[1546]: time="2025-12-16T13:11:26.868276778Z" level=info msg="connecting to shim 367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1" address="unix:///run/containerd/s/e556d46b20500ac4d6f9d039f79c2ebe1208d3d53e8c526abc71c33897fb2a41" protocol=ttrpc version=3 Dec 16 13:11:26.869076 containerd[1546]: time="2025-12-16T13:11:26.869057240Z" level=info msg="CreateContainer within sandbox \"b9d52420e49ebaad5c5806632059f6f1839c28481bc349377d93703a9a6eb3ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df\"" Dec 16 13:11:26.871080 containerd[1546]: time="2025-12-16T13:11:26.871062654Z" level=info msg="StartContainer for \"26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df\"" Dec 16 13:11:26.872928 containerd[1546]: time="2025-12-16T13:11:26.872909268Z" level=info msg="connecting to shim 26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df" address="unix:///run/containerd/s/e6f4f0646957a5f343a90788cd078963233b01c2f8d312eaf2e003903ab95eab" protocol=ttrpc version=3 Dec 16 13:11:26.893088 systemd[1]: Started cri-containerd-367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1.scope - libcontainer container 367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1. Dec 16 13:11:26.894950 systemd[1]: Started cri-containerd-d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93.scope - libcontainer container d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93. Dec 16 13:11:26.901167 systemd[1]: Started cri-containerd-26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df.scope - libcontainer container 26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df. 
Dec 16 13:11:26.991484 containerd[1546]: time="2025-12-16T13:11:26.990386432Z" level=info msg="StartContainer for \"d059972249aa395167fa11dd56651948691af01eb02b9ffabacf88737d44fd93\" returns successfully" Dec 16 13:11:27.002546 containerd[1546]: time="2025-12-16T13:11:27.002515997Z" level=info msg="StartContainer for \"367b51a3208e8f2fbb7cbae55bc035f064957013306a1fdc29548d8192cb52f1\" returns successfully" Dec 16 13:11:27.008613 containerd[1546]: time="2025-12-16T13:11:27.008587759Z" level=info msg="StartContainer for \"26347cb33bd31b59c3f798defc9336e481eee67a14a42a0b425df720004be8df\" returns successfully" Dec 16 13:11:27.013497 kubelet[2366]: E1216 13:11:27.013464 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-230?timeout=10s\": dial tcp 172.239.197.230:6443: connect: connection refused" interval="1.6s" Dec 16 13:11:27.020561 kubelet[2366]: E1216 13:11:27.020539 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.197.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:11:27.036746 kubelet[2366]: E1216 13:11:27.036669 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.197.230:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-197-230&limit=500&resourceVersion=0\": dial tcp 172.239.197.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:11:27.198369 kubelet[2366]: I1216 13:11:27.198332 2366 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-230" Dec 16 13:11:27.652675 kubelet[2366]: E1216 13:11:27.652447 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:27.652675 kubelet[2366]: E1216 13:11:27.652576 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:27.660573 kubelet[2366]: E1216 13:11:27.660209 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:27.660573 kubelet[2366]: E1216 13:11:27.660307 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:27.663142 kubelet[2366]: E1216 13:11:27.663126 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:27.664234 kubelet[2366]: E1216 13:11:27.664221 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:28.493530 kubelet[2366]: I1216 13:11:28.493482 2366 kubelet_node_status.go:78] "Successfully registered node" node="172-239-197-230" Dec 16 13:11:28.494206 kubelet[2366]: E1216 
13:11:28.494044 2366 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-239-197-230\": node \"172-239-197-230\" not found" Dec 16 13:11:28.556350 kubelet[2366]: E1216 13:11:28.556316 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:28.656588 kubelet[2366]: E1216 13:11:28.656534 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:28.667332 kubelet[2366]: E1216 13:11:28.667271 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:28.667915 kubelet[2366]: E1216 13:11:28.667447 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:28.667915 kubelet[2366]: E1216 13:11:28.667772 2366 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-230\" not found" node="172-239-197-230" Dec 16 13:11:28.668137 kubelet[2366]: E1216 13:11:28.667852 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:28.758094 kubelet[2366]: E1216 13:11:28.757633 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:28.858380 kubelet[2366]: E1216 13:11:28.858321 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:28.959016 kubelet[2366]: E1216 13:11:28.958969 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:29.059648 kubelet[2366]: E1216 13:11:29.059251 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:29.160335 kubelet[2366]: E1216 13:11:29.160287 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:29.208730 kubelet[2366]: I1216 13:11:29.208516 2366 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:29.215872 kubelet[2366]: E1216 13:11:29.215047 2366 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-197-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:29.216084 kubelet[2366]: I1216 13:11:29.215072 2366 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:29.218174 kubelet[2366]: E1216 13:11:29.218137 2366 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-197-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:29.218332 kubelet[2366]: I1216 13:11:29.218264 2366 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:29.221493 kubelet[2366]: E1216 13:11:29.221473 2366 
kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-197-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:29.590265 kubelet[2366]: I1216 13:11:29.589850 2366 apiserver.go:52] "Watching apiserver" Dec 16 13:11:29.608957 kubelet[2366]: I1216 13:11:29.608933 2366 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:11:30.153876 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:11:30.449291 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-7.scope)... Dec 16 13:11:30.449315 systemd[1]: Reloading... Dec 16 13:11:30.573010 zram_generator::config[2698]: No configuration found. Dec 16 13:11:30.793929 systemd[1]: Reloading finished in 344 ms. Dec 16 13:11:30.824607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:30.840336 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:11:30.840637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:30.840677 systemd[1]: kubelet.service: Consumed 996ms CPU time, 124.9M memory peak. Dec 16 13:11:30.843474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:11:31.027763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:11:31.037212 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:11:31.076663 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:11:31.076663 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:11:31.076663 kubelet[2747]: I1216 13:11:31.076622 2747 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:11:31.083831 kubelet[2747]: I1216 13:11:31.083796 2747 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:11:31.084925 kubelet[2747]: I1216 13:11:31.083926 2747 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:11:31.084925 kubelet[2747]: I1216 13:11:31.083960 2747 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:11:31.084925 kubelet[2747]: I1216 13:11:31.083985 2747 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
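Looking back over the bootstrap attempts: the "Failed to ensure lease exists, will retry" entries back off from interval="200ms" through 400ms and 800ms to 1.6s while the API server at 172.239.197.230:6443 still refuses connections. A doubling backoff consistent with those observed intervals; the ceiling below is an assumption for illustration, not a value taken from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Double on each failure, matching the observed intervals
    	// (200ms, 400ms, 800ms, 1.6s). The ceiling is illustrative.
    	const ceiling = 7 * time.Second
    	interval := 200 * time.Millisecond
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d failed, retry in %v\n", attempt, interval)
    		interval *= 2
    		if interval > ceiling {
    			interval = ceiling
    		}
    	}
    }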
Dec 16 13:11:31.084925 kubelet[2747]: I1216 13:11:31.084201 2747 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:11:31.085315 kubelet[2747]: I1216 13:11:31.085289 2747 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:11:31.087579 kubelet[2747]: I1216 13:11:31.087252 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:11:31.090408 kubelet[2747]: I1216 13:11:31.090388 2747 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:11:31.096455 kubelet[2747]: I1216 13:11:31.096423 2747 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 13:11:31.096667 kubelet[2747]: I1216 13:11:31.096634 2747 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:11:31.096889 kubelet[2747]: I1216 13:11:31.096657 2747 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-197-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:11:31.096889 kubelet[2747]: I1216 13:11:31.096880 2747 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:11:31.096889 kubelet[2747]: I1216 13:11:31.096891 2747 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:11:31.097221 kubelet[2747]: I1216 13:11:31.096915 2747 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:11:31.097744 kubelet[2747]: I1216 13:11:31.097714 2747 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:31.099096 kubelet[2747]: I1216 13:11:31.097852 2747 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:11:31.099096 kubelet[2747]: I1216 13:11:31.098254 2747 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:11:31.099096 kubelet[2747]: I1216 13:11:31.098283 2747 kubelet.go:387] "Adding apiserver pod source" 
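The HardEvictionThresholds block in the NodeConfig JSON above decodes to the stock kubelet defaults: evict when memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, or imagefs.inodesFree < 5%, each with a zero grace period. Restated as a small Go literal (the field names here are illustrative, not kubelet's actual types):

    package main

    import "fmt"

    // threshold restates one entry of the HardEvictionThresholds JSON above.
    type threshold struct {
    	signal   string
    	quantity string  // absolute cutoff, if any
    	percent  float64 // fractional cutoff, if any
    }

    func main() {
    	defaults := []threshold{
    		{"memory.available", "100Mi", 0},
    		{"nodefs.available", "", 0.10},
    		{"nodefs.inodesFree", "", 0.05},
    		{"imagefs.available", "", 0.15},
    		{"imagefs.inodesFree", "", 0.05},
    	}
    	for _, t := range defaults {
    		if t.quantity != "" {
    			fmt.Printf("evict when %s < %s\n", t.signal, t.quantity)
    			continue
    		}
    		fmt.Printf("evict when %s < %.0f%%\n", t.signal, t.percent*100)
    	}
    }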
Dec 16 13:11:31.099096 kubelet[2747]: I1216 13:11:31.098302 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:11:31.103075 kubelet[2747]: I1216 13:11:31.102976 2747 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:11:31.104245 kubelet[2747]: I1216 13:11:31.103599 2747 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:11:31.104485 kubelet[2747]: I1216 13:11:31.104392 2747 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:11:31.108977 kubelet[2747]: I1216 13:11:31.108963 2747 server.go:1262] "Started kubelet" Dec 16 13:11:31.109626 kubelet[2747]: I1216 13:11:31.109518 2747 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:11:31.110301 kubelet[2747]: I1216 13:11:31.110063 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:11:31.111416 kubelet[2747]: I1216 13:11:31.111402 2747 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:11:31.121830 kubelet[2747]: I1216 13:11:31.121801 2747 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:11:31.122578 kubelet[2747]: I1216 13:11:31.122529 2747 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:11:31.123495 kubelet[2747]: I1216 13:11:31.123474 2747 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:11:31.123563 kubelet[2747]: I1216 13:11:31.123551 2747 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:11:31.124526 kubelet[2747]: I1216 13:11:31.124470 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:11:31.125555 kubelet[2747]: I1216 13:11:31.125533 2747 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:11:31.125690 kubelet[2747]: E1216 13:11:31.125660 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-197-230\" not found" Dec 16 13:11:31.126542 kubelet[2747]: I1216 13:11:31.126507 2747 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:11:31.126787 kubelet[2747]: I1216 13:11:31.126750 2747 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:11:31.133572 kubelet[2747]: I1216 13:11:31.133296 2747 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:11:31.133572 kubelet[2747]: I1216 13:11:31.133388 2747 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:11:31.134744 kubelet[2747]: E1216 13:11:31.134728 2747 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:11:31.137590 kubelet[2747]: I1216 13:11:31.137575 2747 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:11:31.144531 kubelet[2747]: I1216 13:11:31.144501 2747 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:11:31.144531 kubelet[2747]: I1216 13:11:31.144525 2747 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:11:31.144624 kubelet[2747]: I1216 13:11:31.144546 2747 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:11:31.144624 kubelet[2747]: E1216 13:11:31.144587 2747 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:11:31.181082 kubelet[2747]: I1216 13:11:31.180759 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:11:31.181082 kubelet[2747]: I1216 13:11:31.181036 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:11:31.181082 kubelet[2747]: I1216 13:11:31.181056 2747 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:11:31.181229 kubelet[2747]: I1216 13:11:31.181163 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:11:31.181229 kubelet[2747]: I1216 13:11:31.181173 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:11:31.181229 kubelet[2747]: I1216 13:11:31.181188 2747 policy_none.go:49] "None policy: Start" Dec 16 13:11:31.181229 kubelet[2747]: I1216 13:11:31.181197 2747 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:11:31.181229 kubelet[2747]: I1216 13:11:31.181207 2747 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:11:31.181323 kubelet[2747]: I1216 13:11:31.181283 2747 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:11:31.181323 kubelet[2747]: I1216 13:11:31.181291 2747 policy_none.go:47] "Start" Dec 16 13:11:31.188183 kubelet[2747]: E1216 13:11:31.188113 2747 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:11:31.188392 kubelet[2747]: I1216 13:11:31.188369 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:11:31.188492 kubelet[2747]: I1216 13:11:31.188453 2747 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:11:31.189327 kubelet[2747]: I1216 13:11:31.189139 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:11:31.189918 kubelet[2747]: E1216 13:11:31.189891 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:11:31.246277 kubelet[2747]: I1216 13:11:31.245743 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:31.246277 kubelet[2747]: I1216 13:11:31.245820 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:31.246277 kubelet[2747]: I1216 13:11:31.246204 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:31.291709 kubelet[2747]: I1216 13:11:31.291671 2747 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-230" Dec 16 13:11:31.298438 kubelet[2747]: I1216 13:11:31.298396 2747 kubelet_node_status.go:124] "Node was previously registered" node="172-239-197-230" Dec 16 13:11:31.298493 kubelet[2747]: I1216 13:11:31.298470 2747 kubelet_node_status.go:78] "Successfully registered node" node="172-239-197-230" Dec 16 13:11:31.428076 kubelet[2747]: I1216 13:11:31.428013 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-kubeconfig\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:31.428076 kubelet[2747]: I1216 13:11:31.428073 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:31.428327 kubelet[2747]: I1216 13:11:31.428111 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab340b9a9b40ee7999202d0401cb8fd4-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-197-230\" (UID: \"ab340b9a9b40ee7999202d0401cb8fd4\") " pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:31.428327 kubelet[2747]: I1216 13:11:31.428129 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-ca-certs\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:31.428327 kubelet[2747]: I1216 13:11:31.428147 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-flexvolume-dir\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:31.428327 kubelet[2747]: I1216 13:11:31.428161 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0081f2d6a121b8f958f066ecf2424673-k8s-certs\") pod \"kube-controller-manager-172-239-197-230\" (UID: \"0081f2d6a121b8f958f066ecf2424673\") " pod="kube-system/kube-controller-manager-172-239-197-230" Dec 16 13:11:31.428327 
kubelet[2747]: I1216 13:11:31.428177 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e02ea800c29c47f03c2f6a231e090398-kubeconfig\") pod \"kube-scheduler-172-239-197-230\" (UID: \"e02ea800c29c47f03c2f6a231e090398\") " pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:31.428454 kubelet[2747]: I1216 13:11:31.428192 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab340b9a9b40ee7999202d0401cb8fd4-ca-certs\") pod \"kube-apiserver-172-239-197-230\" (UID: \"ab340b9a9b40ee7999202d0401cb8fd4\") " pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:31.428454 kubelet[2747]: I1216 13:11:31.428209 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab340b9a9b40ee7999202d0401cb8fd4-k8s-certs\") pod \"kube-apiserver-172-239-197-230\" (UID: \"ab340b9a9b40ee7999202d0401cb8fd4\") " pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:31.458398 sudo[2785]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:11:31.458764 sudo[2785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:11:31.553197 kubelet[2747]: E1216 13:11:31.553148 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:31.553398 kubelet[2747]: E1216 13:11:31.553289 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:31.553604 kubelet[2747]: E1216 13:11:31.553575 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:31.772575 sudo[2785]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:32.103000 kubelet[2747]: I1216 13:11:32.102037 2747 apiserver.go:52] "Watching apiserver" Dec 16 13:11:32.127651 kubelet[2747]: I1216 13:11:32.127626 2747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:11:32.163997 kubelet[2747]: I1216 13:11:32.163971 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:32.164404 kubelet[2747]: E1216 13:11:32.164090 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:32.166310 kubelet[2747]: I1216 13:11:32.166144 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:32.173632 kubelet[2747]: E1216 13:11:32.173616 2747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-197-230\" already exists" pod="kube-system/kube-apiserver-172-239-197-230" Dec 16 13:11:32.173952 kubelet[2747]: E1216 13:11:32.173937 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 
172.232.0.16 172.232.0.21" Dec 16 13:11:32.175314 kubelet[2747]: E1216 13:11:32.175293 2747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-197-230\" already exists" pod="kube-system/kube-scheduler-172-239-197-230" Dec 16 13:11:32.175439 kubelet[2747]: E1216 13:11:32.175418 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:32.193394 kubelet[2747]: I1216 13:11:32.193341 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-197-230" podStartSLOduration=1.193329866 podStartE2EDuration="1.193329866s" podCreationTimestamp="2025-12-16 13:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:32.192635425 +0000 UTC m=+1.150889613" watchObservedRunningTime="2025-12-16 13:11:32.193329866 +0000 UTC m=+1.151584054" Dec 16 13:11:32.205919 kubelet[2747]: I1216 13:11:32.205777 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-197-230" podStartSLOduration=1.205766461 podStartE2EDuration="1.205766461s" podCreationTimestamp="2025-12-16 13:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:32.200923771 +0000 UTC m=+1.159177959" watchObservedRunningTime="2025-12-16 13:11:32.205766461 +0000 UTC m=+1.164020649" Dec 16 13:11:33.085623 sudo[1793]: pam_unix(sudo:session): session closed for user root Dec 16 13:11:33.139883 sshd[1792]: Connection closed by 139.178.89.65 port 36272 Dec 16 13:11:33.140665 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Dec 16 13:11:33.146265 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:11:33.146417 systemd[1]: sshd@6-172.239.197.230:22-139.178.89.65:36272.service: Deactivated successfully. Dec 16 13:11:33.150701 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:11:33.151232 systemd[1]: session-7.scope: Consumed 5.582s CPU time, 271.4M memory peak. Dec 16 13:11:33.154256 systemd-logind[1526]: Removed session 7. Dec 16 13:11:33.165168 kubelet[2747]: E1216 13:11:33.165099 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:33.165701 kubelet[2747]: E1216 13:11:33.165674 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:37.946037 kubelet[2747]: I1216 13:11:37.945973 2747 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:11:37.948399 kubelet[2747]: I1216 13:11:37.947245 2747 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:11:37.948436 containerd[1546]: time="2025-12-16T13:11:37.947048837Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
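[Note] The repeated dns.go:154 errors that begin here come from the kubelet reconciling the node's resolv.conf: the glibc resolver honors at most three nameserver entries, so the kubelet applies the first three and warns about the rest (hence the applied line 172.232.0.17 172.232.0.16 172.232.0.21). A minimal sketch of that trimming rule, assuming a standard resolv.conf layout; the helper name is illustrative, not kubelet code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors glibc's MAXNS: resolv.conf entries beyond
// the third are ignored by the resolver, so keeping them is pointless.
const maxNameservers = 3

// applyNameserverLimit is a hypothetical helper: it returns the first
// three nameservers and the ones that would be dropped.
func applyNameserverLimit(path string) (kept, dropped []string, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped, sc.Err()
}

func main() {
	kept, dropped, err := applyNameserverLimit("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(dropped) > 0 {
		// The kubelet logs a similar warning and reports the applied line,
		// e.g. "172.232.0.17 172.232.0.16 172.232.0.21" in the entries above.
		fmt.Printf("nameserver limits exceeded; applied: %s (dropped %d)\n",
			strings.Join(kept, " "), len(dropped))
	}
}
```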
Dec 16 13:11:38.695549 kubelet[2747]: I1216 13:11:38.695485 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-197-230" podStartSLOduration=7.695469734 podStartE2EDuration="7.695469734s" podCreationTimestamp="2025-12-16 13:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:32.205932641 +0000 UTC m=+1.164186829" watchObservedRunningTime="2025-12-16 13:11:38.695469734 +0000 UTC m=+7.653723922" Dec 16 13:11:38.714719 systemd[1]: Created slice kubepods-besteffort-podb485c970_9cef_4585_b9bc_0ac85ee43d31.slice - libcontainer container kubepods-besteffort-podb485c970_9cef_4585_b9bc_0ac85ee43d31.slice. Dec 16 13:11:38.729847 systemd[1]: Created slice kubepods-burstable-podbfefc91b_949b_4810_ba60_3e2102621050.slice - libcontainer container kubepods-burstable-podbfefc91b_949b_4810_ba60_3e2102621050.slice. Dec 16 13:11:38.777519 kubelet[2747]: I1216 13:11:38.777444 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-bpf-maps\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.777519 kubelet[2747]: I1216 13:11:38.777512 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-cgroup\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.777905 kubelet[2747]: I1216 13:11:38.777533 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-hubble-tls\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.777905 kubelet[2747]: I1216 13:11:38.777579 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b485c970-9cef-4585-b9bc-0ac85ee43d31-lib-modules\") pod \"kube-proxy-qxg5v\" (UID: \"b485c970-9cef-4585-b9bc-0ac85ee43d31\") " pod="kube-system/kube-proxy-qxg5v" Dec 16 13:11:38.777905 kubelet[2747]: I1216 13:11:38.777595 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-hostproc\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.777905 kubelet[2747]: I1216 13:11:38.777609 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-lib-modules\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.777905 kubelet[2747]: I1216 13:11:38.777622 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b485c970-9cef-4585-b9bc-0ac85ee43d31-xtables-lock\") pod \"kube-proxy-qxg5v\" (UID: \"b485c970-9cef-4585-b9bc-0ac85ee43d31\") " 
pod="kube-system/kube-proxy-qxg5v" Dec 16 13:11:38.777905 kubelet[2747]: I1216 13:11:38.777676 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cni-path\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778049 kubelet[2747]: I1216 13:11:38.777742 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-etc-cni-netd\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778049 kubelet[2747]: I1216 13:11:38.777778 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-xtables-lock\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778049 kubelet[2747]: I1216 13:11:38.777825 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfefc91b-949b-4810-ba60-3e2102621050-clustermesh-secrets\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778049 kubelet[2747]: I1216 13:11:38.777901 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-kernel\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778049 kubelet[2747]: I1216 13:11:38.777948 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-run\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778049 kubelet[2747]: I1216 13:11:38.777986 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfefc91b-949b-4810-ba60-3e2102621050-cilium-config-path\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778176 kubelet[2747]: I1216 13:11:38.778010 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-net\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778176 kubelet[2747]: I1216 13:11:38.778028 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvl9\" (UniqueName: \"kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-kube-api-access-fxvl9\") pod \"cilium-8zgqm\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") " pod="kube-system/cilium-8zgqm" Dec 16 13:11:38.778176 kubelet[2747]: I1216 13:11:38.778060 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b485c970-9cef-4585-b9bc-0ac85ee43d31-kube-proxy\") pod \"kube-proxy-qxg5v\" (UID: \"b485c970-9cef-4585-b9bc-0ac85ee43d31\") " pod="kube-system/kube-proxy-qxg5v" Dec 16 13:11:38.778176 kubelet[2747]: I1216 13:11:38.778078 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlhd\" (UniqueName: \"kubernetes.io/projected/b485c970-9cef-4585-b9bc-0ac85ee43d31-kube-api-access-kxlhd\") pod \"kube-proxy-qxg5v\" (UID: \"b485c970-9cef-4585-b9bc-0ac85ee43d31\") " pod="kube-system/kube-proxy-qxg5v" Dec 16 13:11:38.900633 kubelet[2747]: E1216 13:11:38.897829 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.028505 kubelet[2747]: E1216 13:11:39.028361 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.031344 containerd[1546]: time="2025-12-16T13:11:39.030304466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxg5v,Uid:b485c970-9cef-4585-b9bc-0ac85ee43d31,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:39.036530 kubelet[2747]: E1216 13:11:39.036498 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.037787 containerd[1546]: time="2025-12-16T13:11:39.037730150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8zgqm,Uid:bfefc91b-949b-4810-ba60-3e2102621050,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:39.067415 containerd[1546]: time="2025-12-16T13:11:39.067024541Z" level=info msg="connecting to shim 67ed0ce78f70f8d781eaf64804e60d3efa237bf5f0d829e2278655e4e4a713d1" address="unix:///run/containerd/s/7f699798f9cd6faa8d731857ca911fd417a12c31efd5df87dd62f79fb0cc674b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:39.076695 containerd[1546]: time="2025-12-16T13:11:39.076649326Z" level=info msg="connecting to shim df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7" address="unix:///run/containerd/s/51bb4dede53962bc20ae1dba717f3c6d58c6207693e98fec0ab4dc0ea6d73db2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:39.118773 systemd[1]: Created slice kubepods-besteffort-pode43f38f7_2e85_4778_a7b4_285e063b1084.slice - libcontainer container kubepods-besteffort-pode43f38f7_2e85_4778_a7b4_285e063b1084.slice. Dec 16 13:11:39.155183 systemd[1]: Started cri-containerd-df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7.scope - libcontainer container df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7. Dec 16 13:11:39.164074 systemd[1]: Started cri-containerd-67ed0ce78f70f8d781eaf64804e60d3efa237bf5f0d829e2278655e4e4a713d1.scope - libcontainer container 67ed0ce78f70f8d781eaf64804e60d3efa237bf5f0d829e2278655e4e4a713d1. 
Dec 16 13:11:39.182352 kubelet[2747]: E1216 13:11:39.181785 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.184467 kubelet[2747]: I1216 13:11:39.182803 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6q2\" (UniqueName: \"kubernetes.io/projected/e43f38f7-2e85-4778-a7b4-285e063b1084-kube-api-access-sb6q2\") pod \"cilium-operator-6f9c7c5859-zvvht\" (UID: \"e43f38f7-2e85-4778-a7b4-285e063b1084\") " pod="kube-system/cilium-operator-6f9c7c5859-zvvht" Dec 16 13:11:39.185840 kubelet[2747]: I1216 13:11:39.185110 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e43f38f7-2e85-4778-a7b4-285e063b1084-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-zvvht\" (UID: \"e43f38f7-2e85-4778-a7b4-285e063b1084\") " pod="kube-system/cilium-operator-6f9c7c5859-zvvht" Dec 16 13:11:39.228998 containerd[1546]: time="2025-12-16T13:11:39.228802952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxg5v,Uid:b485c970-9cef-4585-b9bc-0ac85ee43d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"67ed0ce78f70f8d781eaf64804e60d3efa237bf5f0d829e2278655e4e4a713d1\"" Dec 16 13:11:39.229670 kubelet[2747]: E1216 13:11:39.229623 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.234226 containerd[1546]: time="2025-12-16T13:11:39.234201970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8zgqm,Uid:bfefc91b-949b-4810-ba60-3e2102621050,Namespace:kube-system,Attempt:0,} returns sandbox id \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\"" Dec 16 13:11:39.236133 kubelet[2747]: E1216 13:11:39.236107 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.237429 containerd[1546]: time="2025-12-16T13:11:39.237355166Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:11:39.239391 containerd[1546]: time="2025-12-16T13:11:39.239356432Z" level=info msg="CreateContainer within sandbox \"67ed0ce78f70f8d781eaf64804e60d3efa237bf5f0d829e2278655e4e4a713d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:11:39.247138 containerd[1546]: time="2025-12-16T13:11:39.247111050Z" level=info msg="Container 712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:39.252099 containerd[1546]: time="2025-12-16T13:11:39.252052956Z" level=info msg="CreateContainer within sandbox \"67ed0ce78f70f8d781eaf64804e60d3efa237bf5f0d829e2278655e4e4a713d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9\"" Dec 16 13:11:39.255399 containerd[1546]: time="2025-12-16T13:11:39.255330529Z" level=info msg="StartContainer for \"712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9\"" Dec 16 13:11:39.257462 containerd[1546]: time="2025-12-16T13:11:39.257428513Z" level=info msg="connecting 
to shim 712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9" address="unix:///run/containerd/s/7f699798f9cd6faa8d731857ca911fd417a12c31efd5df87dd62f79fb0cc674b" protocol=ttrpc version=3 Dec 16 13:11:39.281022 systemd[1]: Started cri-containerd-712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9.scope - libcontainer container 712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9. Dec 16 13:11:39.363291 containerd[1546]: time="2025-12-16T13:11:39.363235520Z" level=info msg="StartContainer for \"712ca6e1af2e896264dbb3a7a07023f5290ff392c0ed07f2afd808fd232755d9\" returns successfully" Dec 16 13:11:39.430822 kubelet[2747]: E1216 13:11:39.430185 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.433916 containerd[1546]: time="2025-12-16T13:11:39.433131518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zvvht,Uid:e43f38f7-2e85-4778-a7b4-285e063b1084,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:39.448044 containerd[1546]: time="2025-12-16T13:11:39.447986564Z" level=info msg="connecting to shim ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4" address="unix:///run/containerd/s/2b0720001e3e15eae416bb13c46c153ce5aadb2e215a8a3592437f41f5136fad" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:39.489026 systemd[1]: Started cri-containerd-ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4.scope - libcontainer container ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4. Dec 16 13:11:39.541600 containerd[1546]: time="2025-12-16T13:11:39.541510509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zvvht,Uid:e43f38f7-2e85-4778-a7b4-285e063b1084,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\"" Dec 16 13:11:39.542901 kubelet[2747]: E1216 13:11:39.542390 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:39.833317 kubelet[2747]: E1216 13:11:39.829242 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:40.189450 kubelet[2747]: E1216 13:11:40.189354 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:40.189941 kubelet[2747]: E1216 13:11:40.189807 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:40.204707 kubelet[2747]: I1216 13:11:40.204556 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qxg5v" podStartSLOduration=2.204539877 podStartE2EDuration="2.204539877s" podCreationTimestamp="2025-12-16 13:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:40.20438767 +0000 UTC m=+9.162641858" watchObservedRunningTime="2025-12-16 13:11:40.204539877 +0000 UTC 
m=+9.162794065" Dec 16 13:11:41.903123 kubelet[2747]: E1216 13:11:41.903000 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:42.524319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674904274.mount: Deactivated successfully. Dec 16 13:11:44.230168 containerd[1546]: time="2025-12-16T13:11:44.230108725Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:44.232105 containerd[1546]: time="2025-12-16T13:11:44.231881884Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:11:44.232547 containerd[1546]: time="2025-12-16T13:11:44.232520416Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:44.234340 containerd[1546]: time="2025-12-16T13:11:44.234303055Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.99689278s" Dec 16 13:11:44.234340 containerd[1546]: time="2025-12-16T13:11:44.234335255Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:11:44.236098 containerd[1546]: time="2025-12-16T13:11:44.235707419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:11:44.243554 containerd[1546]: time="2025-12-16T13:11:44.243487196Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:11:44.249906 containerd[1546]: time="2025-12-16T13:11:44.249547335Z" level=info msg="Container f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:44.257564 containerd[1546]: time="2025-12-16T13:11:44.257526661Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\"" Dec 16 13:11:44.259644 containerd[1546]: time="2025-12-16T13:11:44.259612117Z" level=info msg="StartContainer for \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\"" Dec 16 13:11:44.260954 containerd[1546]: time="2025-12-16T13:11:44.260720574Z" level=info msg="connecting to shim f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa" address="unix:///run/containerd/s/51bb4dede53962bc20ae1dba717f3c6d58c6207693e98fec0ab4dc0ea6d73db2" protocol=ttrpc version=3 Dec 16 13:11:44.291008 systemd[1]: Started 
cri-containerd-f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa.scope - libcontainer container f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa. Dec 16 13:11:44.331482 containerd[1546]: time="2025-12-16T13:11:44.331411879Z" level=info msg="StartContainer for \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\" returns successfully" Dec 16 13:11:44.344426 systemd[1]: cri-containerd-f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa.scope: Deactivated successfully. Dec 16 13:11:44.346309 containerd[1546]: time="2025-12-16T13:11:44.346242345Z" level=info msg="received container exit event container_id:\"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\" id:\"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\" pid:3168 exited_at:{seconds:1765890704 nanos:345924928}" Dec 16 13:11:44.382949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa-rootfs.mount: Deactivated successfully. Dec 16 13:11:45.129527 update_engine[1529]: I20251216 13:11:45.128911 1529 update_attempter.cc:509] Updating boot flags... Dec 16 13:11:45.206547 kubelet[2747]: E1216 13:11:45.204799 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:45.223904 containerd[1546]: time="2025-12-16T13:11:45.223556487Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:11:45.240101 containerd[1546]: time="2025-12-16T13:11:45.240060546Z" level=info msg="Container b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:45.257123 containerd[1546]: time="2025-12-16T13:11:45.254169972Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\"" Dec 16 13:11:45.259944 containerd[1546]: time="2025-12-16T13:11:45.259921169Z" level=info msg="StartContainer for \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\"" Dec 16 13:11:45.262384 containerd[1546]: time="2025-12-16T13:11:45.262360763Z" level=info msg="connecting to shim b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c" address="unix:///run/containerd/s/51bb4dede53962bc20ae1dba717f3c6d58c6207693e98fec0ab4dc0ea6d73db2" protocol=ttrpc version=3 Dec 16 13:11:45.368990 systemd[1]: Started cri-containerd-b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c.scope - libcontainer container b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c. Dec 16 13:11:45.510422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:11:45.510630 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:11:45.511912 containerd[1546]: time="2025-12-16T13:11:45.511794714Z" level=info msg="StartContainer for \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\" returns successfully" Dec 16 13:11:45.512920 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:11:45.519012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
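[Note] The cilium-8zgqm pod runs its init containers in sequence; mount-cgroup has just exited and apply-sysctl-overwrites follows, while the journal shows systemd-sysctl.service restarting at the same moment. Setting a kernel parameter ultimately means writing under /proc/sys; a minimal sketch with an illustrative key and value (not a dump of Cilium's actual overwrites):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl sets a kernel parameter by writing to /proc/sys, e.g.
// "net.ipv4.conf.all.rp_filter" -> /proc/sys/net/ipv4/conf/all/rp_filter.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative only: Cilium relaxes strict reverse-path filtering on
	// its datapath interfaces; this is not the container's real script.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
		os.Exit(1)
	}
}
```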
Dec 16 13:11:45.522990 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:11:45.525307 systemd[1]: cri-containerd-b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c.scope: Deactivated successfully. Dec 16 13:11:45.529412 containerd[1546]: time="2025-12-16T13:11:45.529385203Z" level=info msg="received container exit event container_id:\"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\" id:\"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\" pid:3244 exited_at:{seconds:1765890705 nanos:527382895}" Dec 16 13:11:45.587602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:11:45.605542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c-rootfs.mount: Deactivated successfully. Dec 16 13:11:45.824880 containerd[1546]: time="2025-12-16T13:11:45.824743613Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:45.825634 containerd[1546]: time="2025-12-16T13:11:45.825606543Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:11:45.826135 containerd[1546]: time="2025-12-16T13:11:45.826110908Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:45.827642 containerd[1546]: time="2025-12-16T13:11:45.827555132Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.591813824s" Dec 16 13:11:45.827642 containerd[1546]: time="2025-12-16T13:11:45.827588402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:11:45.831962 containerd[1546]: time="2025-12-16T13:11:45.831940345Z" level=info msg="CreateContainer within sandbox \"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:11:45.838471 containerd[1546]: time="2025-12-16T13:11:45.838451563Z" level=info msg="Container 9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:45.842671 containerd[1546]: time="2025-12-16T13:11:45.842648867Z" level=info msg="CreateContainer within sandbox \"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\"" Dec 16 13:11:45.843302 containerd[1546]: time="2025-12-16T13:11:45.843269870Z" level=info msg="StartContainer for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\"" Dec 16 13:11:45.844629 containerd[1546]: 
time="2025-12-16T13:11:45.844604236Z" level=info msg="connecting to shim 9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0" address="unix:///run/containerd/s/2b0720001e3e15eae416bb13c46c153ce5aadb2e215a8a3592437f41f5136fad" protocol=ttrpc version=3 Dec 16 13:11:45.864067 systemd[1]: Started cri-containerd-9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0.scope - libcontainer container 9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0. Dec 16 13:11:45.902106 containerd[1546]: time="2025-12-16T13:11:45.901943799Z" level=info msg="StartContainer for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" returns successfully" Dec 16 13:11:46.210531 kubelet[2747]: E1216 13:11:46.210491 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:46.217900 kubelet[2747]: E1216 13:11:46.215798 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:46.223580 containerd[1546]: time="2025-12-16T13:11:46.223513583Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:11:46.240245 containerd[1546]: time="2025-12-16T13:11:46.239421011Z" level=info msg="Container 4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:46.252994 containerd[1546]: time="2025-12-16T13:11:46.252952044Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\"" Dec 16 13:11:46.253344 containerd[1546]: time="2025-12-16T13:11:46.253319401Z" level=info msg="StartContainer for \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\"" Dec 16 13:11:46.258453 containerd[1546]: time="2025-12-16T13:11:46.258417489Z" level=info msg="connecting to shim 4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c" address="unix:///run/containerd/s/51bb4dede53962bc20ae1dba717f3c6d58c6207693e98fec0ab4dc0ea6d73db2" protocol=ttrpc version=3 Dec 16 13:11:46.298979 systemd[1]: Started cri-containerd-4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c.scope - libcontainer container 4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c. 
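[Note] Both Cilium images here are pulled by digest rather than by tag, which is why containerd reports repo tag "" and only a repo digest: the digest pins the exact content. An OCI digest is simply the SHA-256 of the blob bytes with an algorithm prefix; a short sketch with a made-up payload:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// contentDigest returns an OCI-style digest string ("sha256:<hex>")
// for a blob; registries and containerd compare these to verify
// that pulled content matches what the reference names.
func contentDigest(blob []byte) string {
	sum := sha256.Sum256(blob)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	manifest := []byte(`{"schemaVersion":2}`) // illustrative payload
	fmt.Println(contentDigest(manifest))
}
```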
Dec 16 13:11:46.313266 kubelet[2747]: I1216 13:11:46.313215 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-zvvht" podStartSLOduration=1.028701199 podStartE2EDuration="7.313182415s" podCreationTimestamp="2025-12-16 13:11:39 +0000 UTC" firstStartedPulling="2025-12-16 13:11:39.544388741 +0000 UTC m=+8.502642929" lastFinishedPulling="2025-12-16 13:11:45.828869957 +0000 UTC m=+14.787124145" observedRunningTime="2025-12-16 13:11:46.256608838 +0000 UTC m=+15.214863026" watchObservedRunningTime="2025-12-16 13:11:46.313182415 +0000 UTC m=+15.271436603" Dec 16 13:11:46.463507 containerd[1546]: time="2025-12-16T13:11:46.463185806Z" level=info msg="StartContainer for \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\" returns successfully" Dec 16 13:11:46.469965 systemd[1]: cri-containerd-4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c.scope: Deactivated successfully. Dec 16 13:11:46.474110 containerd[1546]: time="2025-12-16T13:11:46.474023935Z" level=info msg="received container exit event container_id:\"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\" id:\"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\" pid:3335 exited_at:{seconds:1765890706 nanos:473588681}" Dec 16 13:11:46.518296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c-rootfs.mount: Deactivated successfully. Dec 16 13:11:47.225086 kubelet[2747]: E1216 13:11:47.222285 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:47.225086 kubelet[2747]: E1216 13:11:47.222520 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:47.235896 containerd[1546]: time="2025-12-16T13:11:47.235155776Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:11:47.249127 containerd[1546]: time="2025-12-16T13:11:47.249087085Z" level=info msg="Container b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:47.257563 containerd[1546]: time="2025-12-16T13:11:47.257452127Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\"" Dec 16 13:11:47.258826 containerd[1546]: time="2025-12-16T13:11:47.258604526Z" level=info msg="StartContainer for \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\"" Dec 16 13:11:47.261324 containerd[1546]: time="2025-12-16T13:11:47.261279772Z" level=info msg="connecting to shim b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d" address="unix:///run/containerd/s/51bb4dede53962bc20ae1dba717f3c6d58c6207693e98fec0ab4dc0ea6d73db2" protocol=ttrpc version=3 Dec 16 13:11:47.285016 systemd[1]: Started cri-containerd-b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d.scope - libcontainer container b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d. 
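[Note] The mount-bpf-fs step that just completed exists to ensure /sys/fs/bpf is a mounted bpf filesystem, so Cilium's pinned BPF maps survive agent restarts. That precondition can be checked straight from /proc/mounts; a minimal sketch (the mount point is the conventional one, not read from this host):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isBPFFSMounted reports whether a filesystem of type "bpf" is
// mounted at the given path, according to /proc/mounts.
func isBPFFSMounted(path string) (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts columns: device mountpoint fstype options ...
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == path && fields[2] == "bpf" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	mounted, err := isBPFFSMounted("/sys/fs/bpf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted:", mounted)
}
```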
Dec 16 13:11:47.332404 containerd[1546]: time="2025-12-16T13:11:47.332355656Z" level=info msg="StartContainer for \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\" returns successfully" Dec 16 13:11:47.334106 systemd[1]: cri-containerd-b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d.scope: Deactivated successfully. Dec 16 13:11:47.340104 containerd[1546]: time="2025-12-16T13:11:47.340070693Z" level=info msg="received container exit event container_id:\"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\" id:\"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\" pid:3374 exited_at:{seconds:1765890707 nanos:339354600}" Dec 16 13:11:47.365510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d-rootfs.mount: Deactivated successfully. Dec 16 13:11:48.239899 kubelet[2747]: E1216 13:11:48.238421 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:48.247190 containerd[1546]: time="2025-12-16T13:11:48.247060221Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:11:48.272331 containerd[1546]: time="2025-12-16T13:11:48.269948464Z" level=info msg="Container cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:48.272357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983369932.mount: Deactivated successfully. Dec 16 13:11:48.285349 containerd[1546]: time="2025-12-16T13:11:48.285307580Z" level=info msg="CreateContainer within sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\"" Dec 16 13:11:48.286064 containerd[1546]: time="2025-12-16T13:11:48.285907424Z" level=info msg="StartContainer for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\"" Dec 16 13:11:48.287334 containerd[1546]: time="2025-12-16T13:11:48.287279943Z" level=info msg="connecting to shim cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06" address="unix:///run/containerd/s/51bb4dede53962bc20ae1dba717f3c6d58c6207693e98fec0ab4dc0ea6d73db2" protocol=ttrpc version=3 Dec 16 13:11:48.324005 systemd[1]: Started cri-containerd-cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06.scope - libcontainer container cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06. Dec 16 13:11:48.405787 containerd[1546]: time="2025-12-16T13:11:48.405669578Z" level=info msg="StartContainer for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" returns successfully" Dec 16 13:11:48.590969 kubelet[2747]: I1216 13:11:48.588535 2747 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:11:48.642962 systemd[1]: Created slice kubepods-burstable-pode5c43900_0ffa_446f_9fb3_23d0d46dc44a.slice - libcontainer container kubepods-burstable-pode5c43900_0ffa_446f_9fb3_23d0d46dc44a.slice. Dec 16 13:11:48.660009 systemd[1]: Created slice kubepods-burstable-pod4924ce6a_f7fb_425d_bf42_ae1ee5baba19.slice - libcontainer container kubepods-burstable-pod4924ce6a_f7fb_425d_bf42_ae1ee5baba19.slice. 
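[Note] The "Created slice" entries show how the kubelet's systemd cgroup driver derives unit names: QoS class plus the pod UID with dashes mapped to underscores, e.g. UID e5c43900-0ffa-446f-9fb3-23d0d46dc44a becomes kubepods-burstable-pode5c43900_0ffa_446f_9fb3_23d0d46dc44a.slice. A sketch of that mapping (the helper name is mine, not kubelet code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming visible in the journal: systemd
// unit names reserve "-" as a hierarchy separator, so the UID's dashes
// become underscores. qos is "besteffort" or "burstable", the two
// classes that appear in the slices above.
func podSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches the coredns pod slice created in the log above.
	fmt.Println(podSliceName("burstable", "e5c43900-0ffa-446f-9fb3-23d0d46dc44a"))
}
```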
Dec 16 13:11:48.667335 kubelet[2747]: I1216 13:11:48.667311 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7nm\" (UniqueName: \"kubernetes.io/projected/4924ce6a-f7fb-425d-bf42-ae1ee5baba19-kube-api-access-bg7nm\") pod \"coredns-66bc5c9577-zvbsr\" (UID: \"4924ce6a-f7fb-425d-bf42-ae1ee5baba19\") " pod="kube-system/coredns-66bc5c9577-zvbsr" Dec 16 13:11:48.667570 kubelet[2747]: I1216 13:11:48.667557 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpss\" (UniqueName: \"kubernetes.io/projected/e5c43900-0ffa-446f-9fb3-23d0d46dc44a-kube-api-access-wmpss\") pod \"coredns-66bc5c9577-drfsd\" (UID: \"e5c43900-0ffa-446f-9fb3-23d0d46dc44a\") " pod="kube-system/coredns-66bc5c9577-drfsd" Dec 16 13:11:48.668983 kubelet[2747]: I1216 13:11:48.668965 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4924ce6a-f7fb-425d-bf42-ae1ee5baba19-config-volume\") pod \"coredns-66bc5c9577-zvbsr\" (UID: \"4924ce6a-f7fb-425d-bf42-ae1ee5baba19\") " pod="kube-system/coredns-66bc5c9577-zvbsr" Dec 16 13:11:48.669095 kubelet[2747]: I1216 13:11:48.669083 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5c43900-0ffa-446f-9fb3-23d0d46dc44a-config-volume\") pod \"coredns-66bc5c9577-drfsd\" (UID: \"e5c43900-0ffa-446f-9fb3-23d0d46dc44a\") " pod="kube-system/coredns-66bc5c9577-drfsd" Dec 16 13:11:48.954707 kubelet[2747]: E1216 13:11:48.954669 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:48.956941 containerd[1546]: time="2025-12-16T13:11:48.956399080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-drfsd,Uid:e5c43900-0ffa-446f-9fb3-23d0d46dc44a,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:48.965944 kubelet[2747]: E1216 13:11:48.965688 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:48.966596 containerd[1546]: time="2025-12-16T13:11:48.966548762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zvbsr,Uid:4924ce6a-f7fb-425d-bf42-ae1ee5baba19,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:49.244325 kubelet[2747]: E1216 13:11:49.244181 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:49.257451 kubelet[2747]: I1216 13:11:49.257259 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8zgqm" podStartSLOduration=6.258719108 podStartE2EDuration="11.257242877s" podCreationTimestamp="2025-12-16 13:11:38 +0000 UTC" firstStartedPulling="2025-12-16 13:11:39.236950792 +0000 UTC m=+8.195204980" lastFinishedPulling="2025-12-16 13:11:44.235474541 +0000 UTC m=+13.193728749" observedRunningTime="2025-12-16 13:11:49.25547976 +0000 UTC m=+18.213733968" watchObservedRunningTime="2025-12-16 13:11:49.257242877 +0000 UTC m=+18.215497065" Dec 16 13:11:50.246424 kubelet[2747]: E1216 13:11:50.246381 2747 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:50.772245 systemd-networkd[1436]: cilium_host: Link UP Dec 16 13:11:50.772426 systemd-networkd[1436]: cilium_net: Link UP Dec 16 13:11:50.772625 systemd-networkd[1436]: cilium_host: Gained carrier Dec 16 13:11:50.772816 systemd-networkd[1436]: cilium_net: Gained carrier Dec 16 13:11:50.896952 systemd-networkd[1436]: cilium_vxlan: Link UP Dec 16 13:11:50.896962 systemd-networkd[1436]: cilium_vxlan: Gained carrier Dec 16 13:11:51.114905 kernel: NET: Registered PF_ALG protocol family Dec 16 13:11:51.148472 systemd-networkd[1436]: cilium_host: Gained IPv6LL Dec 16 13:11:51.199940 systemd-networkd[1436]: cilium_net: Gained IPv6LL Dec 16 13:11:51.251301 kubelet[2747]: E1216 13:11:51.251259 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:51.793851 systemd-networkd[1436]: lxc_health: Link UP Dec 16 13:11:51.794588 systemd-networkd[1436]: lxc_health: Gained carrier Dec 16 13:11:52.019051 kernel: eth0: renamed from tmpf9e1f Dec 16 13:11:52.023964 kernel: eth0: renamed from tmp6bcc0 Dec 16 13:11:52.020802 systemd-networkd[1436]: lxc90a6ab88677d: Link UP Dec 16 13:11:52.022499 systemd-networkd[1436]: lxc49b8ea8363be: Link UP Dec 16 13:11:52.022800 systemd-networkd[1436]: lxc90a6ab88677d: Gained carrier Dec 16 13:11:52.030154 systemd-networkd[1436]: lxc49b8ea8363be: Gained carrier Dec 16 13:11:52.920051 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Dec 16 13:11:52.984129 systemd-networkd[1436]: lxc_health: Gained IPv6LL Dec 16 13:11:53.036461 kubelet[2747]: E1216 13:11:53.036425 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:53.176123 systemd-networkd[1436]: lxc49b8ea8363be: Gained IPv6LL Dec 16 13:11:53.560548 systemd-networkd[1436]: lxc90a6ab88677d: Gained IPv6LL Dec 16 13:11:55.219982 containerd[1546]: time="2025-12-16T13:11:55.219919749Z" level=info msg="connecting to shim f9e1f9394ffcb2978dc22a7f531064505202c29c34a6246e9ca4a4d12d235601" address="unix:///run/containerd/s/3a1bc0a32f400dce051828c1f9afc7156de6ee21af99ddaa32a3482e26a5f9aa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:55.233014 containerd[1546]: time="2025-12-16T13:11:55.232821917Z" level=info msg="connecting to shim 6bcc0ca660596bd4155c984ee1b5beb35d5ad84c34e2fd57508045006e172a41" address="unix:///run/containerd/s/9a1f52bd6e9e80e8a4d6e1d6e6fd9acf2088eb3e5c8c3f740ca8be77c79f9097" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:55.263059 systemd[1]: Started cri-containerd-f9e1f9394ffcb2978dc22a7f531064505202c29c34a6246e9ca4a4d12d235601.scope - libcontainer container f9e1f9394ffcb2978dc22a7f531064505202c29c34a6246e9ca4a4d12d235601. Dec 16 13:11:55.279988 systemd[1]: Started cri-containerd-6bcc0ca660596bd4155c984ee1b5beb35d5ad84c34e2fd57508045006e172a41.scope - libcontainer container 6bcc0ca660596bd4155c984ee1b5beb35d5ad84c34e2fd57508045006e172a41. 
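[Note] These systemd-networkd entries trace Cilium's datapath devices coming up in order: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay link, then per-pod lxc* veths, each gaining carrier and an IPv6 link-local address. Enumerating those links needs only the Go standard library; a small sketch:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, ifc := range ifaces {
		// On a node like this one, Cilium's devices carry these
		// prefixes: cilium_host, cilium_net, cilium_vxlan, lxc*.
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			fmt.Printf("%-16s up=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0)
		}
	}
}
```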
Dec 16 13:11:55.368236 containerd[1546]: time="2025-12-16T13:11:55.368143641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-drfsd,Uid:e5c43900-0ffa-446f-9fb3-23d0d46dc44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bcc0ca660596bd4155c984ee1b5beb35d5ad84c34e2fd57508045006e172a41\"" Dec 16 13:11:55.370147 containerd[1546]: time="2025-12-16T13:11:55.370089781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zvbsr,Uid:4924ce6a-f7fb-425d-bf42-ae1ee5baba19,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9e1f9394ffcb2978dc22a7f531064505202c29c34a6246e9ca4a4d12d235601\"" Dec 16 13:11:55.371067 kubelet[2747]: E1216 13:11:55.370785 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:55.374521 kubelet[2747]: E1216 13:11:55.374460 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:55.379490 containerd[1546]: time="2025-12-16T13:11:55.379345667Z" level=info msg="CreateContainer within sandbox \"f9e1f9394ffcb2978dc22a7f531064505202c29c34a6246e9ca4a4d12d235601\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:11:55.380271 containerd[1546]: time="2025-12-16T13:11:55.380228633Z" level=info msg="CreateContainer within sandbox \"6bcc0ca660596bd4155c984ee1b5beb35d5ad84c34e2fd57508045006e172a41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:11:55.394435 containerd[1546]: time="2025-12-16T13:11:55.394396244Z" level=info msg="Container 6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:55.395039 containerd[1546]: time="2025-12-16T13:11:55.394908202Z" level=info msg="Container 0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:55.401930 containerd[1546]: time="2025-12-16T13:11:55.401901849Z" level=info msg="CreateContainer within sandbox \"6bcc0ca660596bd4155c984ee1b5beb35d5ad84c34e2fd57508045006e172a41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b\"" Dec 16 13:11:55.402030 containerd[1546]: time="2025-12-16T13:11:55.401987768Z" level=info msg="CreateContainer within sandbox \"f9e1f9394ffcb2978dc22a7f531064505202c29c34a6246e9ca4a4d12d235601\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053\"" Dec 16 13:11:55.402959 containerd[1546]: time="2025-12-16T13:11:55.402921964Z" level=info msg="StartContainer for \"0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b\"" Dec 16 13:11:55.403351 containerd[1546]: time="2025-12-16T13:11:55.403328302Z" level=info msg="StartContainer for \"6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053\"" Dec 16 13:11:55.404459 containerd[1546]: time="2025-12-16T13:11:55.404348537Z" level=info msg="connecting to shim 0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b" address="unix:///run/containerd/s/9a1f52bd6e9e80e8a4d6e1d6e6fd9acf2088eb3e5c8c3f740ca8be77c79f9097" protocol=ttrpc version=3 Dec 16 13:11:55.404850 containerd[1546]: time="2025-12-16T13:11:55.404795545Z" level=info msg="connecting to shim 
6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053" address="unix:///run/containerd/s/3a1bc0a32f400dce051828c1f9afc7156de6ee21af99ddaa32a3482e26a5f9aa" protocol=ttrpc version=3 Dec 16 13:11:55.428059 systemd[1]: Started cri-containerd-0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b.scope - libcontainer container 0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b. Dec 16 13:11:55.432075 systemd[1]: Started cri-containerd-6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053.scope - libcontainer container 6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053. Dec 16 13:11:55.467787 containerd[1546]: time="2025-12-16T13:11:55.467003617Z" level=info msg="StartContainer for \"0cb7b022854ded86ec135a158e433c46d1825b78add943649fc2c42e770e9c3b\" returns successfully" Dec 16 13:11:55.473699 containerd[1546]: time="2025-12-16T13:11:55.473318217Z" level=info msg="StartContainer for \"6c341622d3786f619a5e49de96bfe5a118a7dd21a259298146cd91ab9d917053\" returns successfully" Dec 16 13:11:56.207973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747333289.mount: Deactivated successfully. Dec 16 13:11:56.260522 kubelet[2747]: E1216 13:11:56.260203 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:56.263577 kubelet[2747]: E1216 13:11:56.263552 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:56.272131 kubelet[2747]: I1216 13:11:56.272079 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-drfsd" podStartSLOduration=17.272068931 podStartE2EDuration="17.272068931s" podCreationTimestamp="2025-12-16 13:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:56.270774397 +0000 UTC m=+25.229028585" watchObservedRunningTime="2025-12-16 13:11:56.272068931 +0000 UTC m=+25.230323129" Dec 16 13:11:56.294756 kubelet[2747]: I1216 13:11:56.294653 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zvbsr" podStartSLOduration=17.294640962 podStartE2EDuration="17.294640962s" podCreationTimestamp="2025-12-16 13:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:56.294300405 +0000 UTC m=+25.252554593" watchObservedRunningTime="2025-12-16 13:11:56.294640962 +0000 UTC m=+25.252895150" Dec 16 13:11:57.265544 kubelet[2747]: E1216 13:11:57.265184 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:57.265544 kubelet[2747]: E1216 13:11:57.265282 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Dec 16 13:11:58.267520 kubelet[2747]: E1216 13:11:58.267404 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" 
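[Note] The pod_startup_latency_tracker entries report podStartE2EDuration as the gap between podCreationTimestamp and the observed running time; when no image pull was needed, both pulling timestamps stay at Go's zero time (0001-01-01 00:00:00 +0000 UTC). Recomputing the coredns figure from the literal timestamps logged above (a sketch):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// Go's time.Parse accepts fractional seconds even when the layout
// omits them, so one layout covers both timestamps below.
const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	// Values copied from the coredns-66bc5c9577-drfsd entry above.
	created, err := time.Parse(layout, "2025-12-16 13:11:39 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2025-12-16 13:11:56.272068931 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 17.272068931s, matching podStartE2EDuration in the log.
	fmt.Println(observed.Sub(created))
}
```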
Dec 16 13:11:58.267520 kubelet[2747]: E1216 13:11:58.267466 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:12:05.376209 kubelet[2747]: I1216 13:12:05.376141 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:12:05.377712 kubelet[2747]: E1216 13:12:05.377675 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:12:06.281587 kubelet[2747]: E1216 13:12:06.281553 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:12:41.145928 kubelet[2747]: E1216 13:12:41.145812 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:12:44.145772 kubelet[2747]: E1216 13:12:44.145735 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:01.146014 kubelet[2747]: E1216 13:13:01.145834 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:03.154686 kubelet[2747]: E1216 13:13:03.154588 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:12.145154 kubelet[2747]: E1216 13:13:12.145117 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:14.145700 kubelet[2747]: E1216 13:13:14.145670 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:15.146286 kubelet[2747]: E1216 13:13:15.145592 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:18.145638 kubelet[2747]: E1216 13:13:18.145593 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:24.894302 systemd[1]: Started sshd@8-172.239.197.230:22-139.178.89.65:55164.service - OpenSSH per-connection server daemon (139.178.89.65:55164).
Dec 16 13:13:25.246975 sshd[4081]: Accepted publickey for core from 139.178.89.65 port 55164 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:25.248594 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:25.254310 systemd-logind[1526]: New session 8 of user core.
Dec 16 13:13:25.257029 systemd[1]: Started session-8.scope - Session 8 of User core.
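[Annotation] The recurring dns.go:154 error is kubelet noticing more nameserver entries than the resolver supports: glibc's resolv.conf honors at most three, so the list is truncated and the applied remainder is logged (172.232.0.17 172.232.0.16 172.232.0.21). A rough sketch of that kind of check, assuming the limit of three; the message format mirrors the log, not kubelet's source:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // resolv.conf limit honored by the glibc resolver

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }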
Dec 16 13:13:25.575091 sshd[4084]: Connection closed by 139.178.89.65 port 55164
Dec 16 13:13:25.575627 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:25.581824 systemd[1]: sshd@8-172.239.197.230:22-139.178.89.65:55164.service: Deactivated successfully.
Dec 16 13:13:25.584703 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 13:13:25.585742 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit.
Dec 16 13:13:25.587464 systemd-logind[1526]: Removed session 8.
Dec 16 13:13:30.647811 systemd[1]: Started sshd@9-172.239.197.230:22-139.178.89.65:37720.service - OpenSSH per-connection server daemon (139.178.89.65:37720).
Dec 16 13:13:31.008525 sshd[4097]: Accepted publickey for core from 139.178.89.65 port 37720 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:31.010171 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:31.015972 systemd-logind[1526]: New session 9 of user core.
Dec 16 13:13:31.023024 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 13:13:31.320679 sshd[4100]: Connection closed by 139.178.89.65 port 37720
Dec 16 13:13:31.321563 sshd-session[4097]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:31.328182 systemd[1]: sshd@9-172.239.197.230:22-139.178.89.65:37720.service: Deactivated successfully.
Dec 16 13:13:31.331560 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:13:31.334131 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:13:31.336651 systemd-logind[1526]: Removed session 9.
Dec 16 13:13:36.380756 systemd[1]: Started sshd@10-172.239.197.230:22-139.178.89.65:37736.service - OpenSSH per-connection server daemon (139.178.89.65:37736).
Dec 16 13:13:36.711664 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 37736 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:36.713073 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:36.717808 systemd-logind[1526]: New session 10 of user core.
Dec 16 13:13:36.725004 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:13:37.014511 sshd[4118]: Connection closed by 139.178.89.65 port 37736
Dec 16 13:13:37.015129 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:37.019675 systemd[1]: sshd@10-172.239.197.230:22-139.178.89.65:37736.service: Deactivated successfully.
Dec 16 13:13:37.021959 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:13:37.023463 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:13:37.024888 systemd-logind[1526]: Removed session 10.
Dec 16 13:13:42.090678 systemd[1]: Started sshd@11-172.239.197.230:22-139.178.89.65:59968.service - OpenSSH per-connection server daemon (139.178.89.65:59968).
Dec 16 13:13:42.439688 sshd[4133]: Accepted publickey for core from 139.178.89.65 port 59968 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:42.441160 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:42.445910 systemd-logind[1526]: New session 11 of user core.
Dec 16 13:13:42.449970 systemd[1]: Started session-11.scope - Session 11 of User core.
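[Annotation] Each "Accepted publickey ... RSA SHA256:..." line carries the unpadded-base64 SHA-256 fingerprint of the client's key, the same for every session here since the same key is reused. A small sketch that derives a fingerprint of that form from an authorized_keys entry using golang.org/x/crypto/ssh; the file path is hypothetical:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Hypothetical path; the actual key behind "SHA256:LWMLg6..." is
        // not recorded in the log.
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Prints the same "SHA256:<base64>" form sshd logs on accept.
        fmt.Println(pub.Type(), ssh.FingerprintSHA256(pub))
    }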
Dec 16 13:13:42.758568 sshd[4136]: Connection closed by 139.178.89.65 port 59968
Dec 16 13:13:42.759747 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:42.764492 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:13:42.765229 systemd[1]: sshd@11-172.239.197.230:22-139.178.89.65:59968.service: Deactivated successfully.
Dec 16 13:13:42.767733 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:13:42.770089 systemd-logind[1526]: Removed session 11.
Dec 16 13:13:42.821061 systemd[1]: Started sshd@12-172.239.197.230:22-139.178.89.65:59970.service - OpenSSH per-connection server daemon (139.178.89.65:59970).
Dec 16 13:13:43.161093 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 59970 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:43.163107 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:43.171369 systemd-logind[1526]: New session 12 of user core.
Dec 16 13:13:43.173985 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:13:43.515041 sshd[4152]: Connection closed by 139.178.89.65 port 59970
Dec 16 13:13:43.515981 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:43.521395 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:13:43.522325 systemd[1]: sshd@12-172.239.197.230:22-139.178.89.65:59970.service: Deactivated successfully.
Dec 16 13:13:43.525183 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:13:43.527375 systemd-logind[1526]: Removed session 12.
Dec 16 13:13:43.581632 systemd[1]: Started sshd@13-172.239.197.230:22-139.178.89.65:59974.service - OpenSSH per-connection server daemon (139.178.89.65:59974).
Dec 16 13:13:43.936823 sshd[4162]: Accepted publickey for core from 139.178.89.65 port 59974 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:43.938616 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:43.948561 systemd-logind[1526]: New session 13 of user core.
Dec 16 13:13:43.955356 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:13:44.236163 sshd[4165]: Connection closed by 139.178.89.65 port 59974
Dec 16 13:13:44.237248 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:44.244201 systemd[1]: sshd@13-172.239.197.230:22-139.178.89.65:59974.service: Deactivated successfully.
Dec 16 13:13:44.248054 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:13:44.250302 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:13:44.252139 systemd-logind[1526]: Removed session 13.
Dec 16 13:13:49.307821 systemd[1]: Started sshd@14-172.239.197.230:22-139.178.89.65:59990.service - OpenSSH per-connection server daemon (139.178.89.65:59990).
Dec 16 13:13:49.657890 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 59990 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:49.659443 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:49.665728 systemd-logind[1526]: New session 14 of user core.
Dec 16 13:13:49.670972 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:13:49.967251 sshd[4180]: Connection closed by 139.178.89.65 port 59990
Dec 16 13:13:49.968195 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:49.976419 systemd[1]: sshd@14-172.239.197.230:22-139.178.89.65:59990.service: Deactivated successfully.
Dec 16 13:13:49.979478 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:13:49.980931 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:13:49.983158 systemd-logind[1526]: Removed session 14.
Dec 16 13:13:50.031515 systemd[1]: Started sshd@15-172.239.197.230:22-139.178.89.65:59994.service - OpenSSH per-connection server daemon (139.178.89.65:59994).
Dec 16 13:13:50.400239 sshd[4192]: Accepted publickey for core from 139.178.89.65 port 59994 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:50.402159 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:50.408412 systemd-logind[1526]: New session 15 of user core.
Dec 16 13:13:50.411033 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:13:50.721071 sshd[4195]: Connection closed by 139.178.89.65 port 59994
Dec 16 13:13:50.722687 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:50.728008 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:13:50.729131 systemd[1]: sshd@15-172.239.197.230:22-139.178.89.65:59994.service: Deactivated successfully.
Dec 16 13:13:50.731161 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:13:50.732933 systemd-logind[1526]: Removed session 15.
Dec 16 13:13:50.790224 systemd[1]: Started sshd@16-172.239.197.230:22-139.178.89.65:38368.service - OpenSSH per-connection server daemon (139.178.89.65:38368).
Dec 16 13:13:51.146081 kubelet[2747]: E1216 13:13:51.145690 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:13:51.159432 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 38368 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:51.161080 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:51.168776 systemd-logind[1526]: New session 16 of user core.
Dec 16 13:13:51.175995 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:13:51.854439 sshd[4207]: Connection closed by 139.178.89.65 port 38368
Dec 16 13:13:51.856029 sshd-session[4204]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:51.859507 systemd[1]: sshd@16-172.239.197.230:22-139.178.89.65:38368.service: Deactivated successfully.
Dec 16 13:13:51.861804 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:13:51.864266 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:13:51.865409 systemd-logind[1526]: Removed session 16.
Dec 16 13:13:51.919355 systemd[1]: Started sshd@17-172.239.197.230:22-139.178.89.65:38372.service - OpenSSH per-connection server daemon (139.178.89.65:38372).
Dec 16 13:13:52.271022 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 38372 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:52.273414 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:52.283192 systemd-logind[1526]: New session 17 of user core.
Dec 16 13:13:52.292004 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:13:52.686359 sshd[4225]: Connection closed by 139.178.89.65 port 38372
Dec 16 13:13:52.686959 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:52.691357 systemd[1]: sshd@17-172.239.197.230:22-139.178.89.65:38372.service: Deactivated successfully.
Dec 16 13:13:52.694271 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:13:52.695933 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:13:52.697437 systemd-logind[1526]: Removed session 17.
Dec 16 13:13:52.755402 systemd[1]: Started sshd@18-172.239.197.230:22-139.178.89.65:38376.service - OpenSSH per-connection server daemon (139.178.89.65:38376).
Dec 16 13:13:53.126470 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 38376 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:53.128516 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:53.134343 systemd-logind[1526]: New session 18 of user core.
Dec 16 13:13:53.139985 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:13:53.446876 sshd[4239]: Connection closed by 139.178.89.65 port 38376
Dec 16 13:13:53.449123 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:53.454088 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:13:53.454974 systemd[1]: sshd@18-172.239.197.230:22-139.178.89.65:38376.service: Deactivated successfully.
Dec 16 13:13:53.457432 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:13:53.459631 systemd-logind[1526]: Removed session 18.
Dec 16 13:13:58.508009 systemd[1]: Started sshd@19-172.239.197.230:22-139.178.89.65:38390.service - OpenSSH per-connection server daemon (139.178.89.65:38390).
Dec 16 13:13:58.843344 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 38390 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:13:58.844329 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:58.850759 systemd-logind[1526]: New session 19 of user core.
Dec 16 13:13:58.856989 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:13:59.141091 sshd[4256]: Connection closed by 139.178.89.65 port 38390
Dec 16 13:13:59.143079 sshd-session[4253]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:59.148008 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:13:59.149266 systemd[1]: sshd@19-172.239.197.230:22-139.178.89.65:38390.service: Deactivated successfully.
Dec 16 13:13:59.151421 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:13:59.152789 systemd-logind[1526]: Removed session 19.
Dec 16 13:14:04.206496 systemd[1]: Started sshd@20-172.239.197.230:22-139.178.89.65:43236.service - OpenSSH per-connection server daemon (139.178.89.65:43236).
Dec 16 13:14:04.547297 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 43236 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:14:04.548914 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:04.554042 systemd-logind[1526]: New session 20 of user core.
Dec 16 13:14:04.560990 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:14:04.845294 sshd[4270]: Connection closed by 139.178.89.65 port 43236
Dec 16 13:14:04.846287 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:04.850793 systemd[1]: sshd@20-172.239.197.230:22-139.178.89.65:43236.service: Deactivated successfully.
Dec 16 13:14:04.853440 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:14:04.854577 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:14:04.861175 systemd-logind[1526]: Removed session 20.
Dec 16 13:14:04.917660 systemd[1]: Started sshd@21-172.239.197.230:22-139.178.89.65:43250.service - OpenSSH per-connection server daemon (139.178.89.65:43250).
Dec 16 13:14:05.276595 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 43250 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:14:05.278960 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:05.283908 systemd-logind[1526]: New session 21 of user core.
Dec 16 13:14:05.288083 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:14:06.812447 containerd[1546]: time="2025-12-16T13:14:06.812065729Z" level=info msg="StopContainer for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" with timeout 30 (s)"
Dec 16 13:14:06.816686 containerd[1546]: time="2025-12-16T13:14:06.814306399Z" level=info msg="Stop container \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" with signal terminated"
Dec 16 13:14:06.839574 containerd[1546]: time="2025-12-16T13:14:06.839524152Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:14:06.841996 systemd[1]: cri-containerd-9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0.scope: Deactivated successfully.
Dec 16 13:14:06.845529 containerd[1546]: time="2025-12-16T13:14:06.844731832Z" level=info msg="received container exit event container_id:\"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" id:\"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" pid:3301 exited_at:{seconds:1765890846 nanos:842490103}"
Dec 16 13:14:06.862466 containerd[1546]: time="2025-12-16T13:14:06.862400005Z" level=info msg="StopContainer for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" with timeout 2 (s)"
Dec 16 13:14:06.862993 containerd[1546]: time="2025-12-16T13:14:06.862700835Z" level=info msg="Stop container \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" with signal terminated"
Dec 16 13:14:06.873565 systemd-networkd[1436]: lxc_health: Link DOWN
Dec 16 13:14:06.873575 systemd-networkd[1436]: lxc_health: Lost carrier
Dec 16 13:14:06.881472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0-rootfs.mount: Deactivated successfully.
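[Annotation] "StopContainer ... with timeout 30" followed by "Stop container ... with signal terminated" is the standard two-phase stop: deliver SIGTERM first, escalate to SIGKILL only if the process outlives the grace period. A miniature of that escalation against an ordinary process (illustrative, not containerd's shim code):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithTimeout mirrors the StopContainer flow above: SIGTERM first,
    // SIGKILL only if the process outlives the timeout.
    func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM)
        select {
        case err := <-done:
            return err // exited on SIGTERM within the grace period
        case <-time.After(timeout):
            _ = cmd.Process.Kill() // escalate to SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(stopWithTimeout(cmd, 30*time.Second))
    }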
Dec 16 13:14:06.894802 systemd[1]: cri-containerd-cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06.scope: Deactivated successfully.
Dec 16 13:14:06.895368 systemd[1]: cri-containerd-cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06.scope: Consumed 6.301s CPU time, 125.5M memory peak, 128K read from disk, 13.3M written to disk.
Dec 16 13:14:06.900568 containerd[1546]: time="2025-12-16T13:14:06.900384850Z" level=info msg="received container exit event container_id:\"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" id:\"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" pid:3413 exited_at:{seconds:1765890846 nanos:900226999}"
Dec 16 13:14:06.904335 containerd[1546]: time="2025-12-16T13:14:06.904314539Z" level=info msg="StopContainer for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" returns successfully"
Dec 16 13:14:06.905341 containerd[1546]: time="2025-12-16T13:14:06.905322619Z" level=info msg="StopPodSandbox for \"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\""
Dec 16 13:14:06.905485 containerd[1546]: time="2025-12-16T13:14:06.905467490Z" level=info msg="Container to stop \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:06.912631 systemd[1]: cri-containerd-ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4.scope: Deactivated successfully.
Dec 16 13:14:06.919158 containerd[1546]: time="2025-12-16T13:14:06.919119311Z" level=info msg="received sandbox exit event container_id:\"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" id:\"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" exit_status:137 exited_at:{seconds:1765890846 nanos:918222031}" monitor_name=podsandbox
Dec 16 13:14:06.935306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06-rootfs.mount: Deactivated successfully.
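[Annotation] The "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" notices show StopPodSandbox walking the sandbox's containers and only signalling those still alive; containers already exited are skipped. A toy version of that precondition; the state names match the CRI enum quoted in the log, while the function itself is hypothetical:

    package main

    import "fmt"

    type State string

    const (
        Running State = "CONTAINER_RUNNING"
        Exited  State = "CONTAINER_EXITED"
        Unknown State = "CONTAINER_UNKNOWN"
    )

    // stopIfRunning applies the precondition from the log: only running or
    // unknown containers get a stop signal; anything else is already down,
    // so the request becomes a logged no-op.
    func stopIfRunning(id string, s State) {
        if s != Running && s != Unknown {
            fmt.Printf("container to stop %q already in state %q, skipping\n", id, s)
            return
        }
        // ... deliver SIGTERM/SIGKILL via the runtime shim here ...
    }

    func main() {
        stopIfRunning("cb03ffb6", Exited) // skipped, mirrors the log notice
    }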
Dec 16 13:14:06.942627 containerd[1546]: time="2025-12-16T13:14:06.942598754Z" level=info msg="StopContainer for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" returns successfully"
Dec 16 13:14:06.943586 containerd[1546]: time="2025-12-16T13:14:06.943544324Z" level=info msg="StopPodSandbox for \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\""
Dec 16 13:14:06.943727 containerd[1546]: time="2025-12-16T13:14:06.943706624Z" level=info msg="Container to stop \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:06.943727 containerd[1546]: time="2025-12-16T13:14:06.943723564Z" level=info msg="Container to stop \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:06.943921 containerd[1546]: time="2025-12-16T13:14:06.943732234Z" level=info msg="Container to stop \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:06.944187 containerd[1546]: time="2025-12-16T13:14:06.943849064Z" level=info msg="Container to stop \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:06.944187 containerd[1546]: time="2025-12-16T13:14:06.944147354Z" level=info msg="Container to stop \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:14:06.954748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4-rootfs.mount: Deactivated successfully.
Dec 16 13:14:06.959127 containerd[1546]: time="2025-12-16T13:14:06.959103426Z" level=info msg="shim disconnected" id=ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4 namespace=k8s.io
Dec 16 13:14:06.959127 containerd[1546]: time="2025-12-16T13:14:06.959125646Z" level=warning msg="cleaning up after shim disconnected" id=ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4 namespace=k8s.io
Dec 16 13:14:06.959273 containerd[1546]: time="2025-12-16T13:14:06.959134126Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:14:06.963402 systemd[1]: cri-containerd-df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7.scope: Deactivated successfully.
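[Annotation] exit_status:137 in the surrounding sandbox exit events follows the shell convention of 128 + signal number: 137 − 128 = 9, i.e. the sandbox's pause process died from SIGKILL. A one-liner decode of that convention:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Exit statuses above 128 encode death by signal, status-128.
        // For the sandbox events above: 137 - 128 = 9 = SIGKILL.
        status := 137
        if status > 128 {
            sig := syscall.Signal(status - 128)
            fmt.Printf("exit_status:%d => signal %d (%s)\n", status, int(sig), sig)
        }
    }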
Dec 16 13:14:06.966609 containerd[1546]: time="2025-12-16T13:14:06.966586587Z" level=info msg="received sandbox exit event container_id:\"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" id:\"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" exit_status:137 exited_at:{seconds:1765890846 nanos:966381097}" monitor_name=podsandbox
Dec 16 13:14:06.979990 containerd[1546]: time="2025-12-16T13:14:06.979965128Z" level=info msg="TearDown network for sandbox \"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" successfully"
Dec 16 13:14:06.979990 containerd[1546]: time="2025-12-16T13:14:06.979986548Z" level=info msg="StopPodSandbox for \"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" returns successfully"
Dec 16 13:14:06.980257 containerd[1546]: time="2025-12-16T13:14:06.980235738Z" level=info msg="received sandbox container exit event sandbox_id:\"ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4\" exit_status:137 exited_at:{seconds:1765890846 nanos:918222031}" monitor_name=criService
Dec 16 13:14:06.981480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac74aee410f7a254a008404241319689338d0ad99eedbdcd6d9d9771396fc5e4-shm.mount: Deactivated successfully.
Dec 16 13:14:07.001367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7-rootfs.mount: Deactivated successfully.
Dec 16 13:14:07.011640 containerd[1546]: time="2025-12-16T13:14:07.011351842Z" level=info msg="shim disconnected" id=df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7 namespace=k8s.io
Dec 16 13:14:07.012156 containerd[1546]: time="2025-12-16T13:14:07.011640342Z" level=warning msg="cleaning up after shim disconnected" id=df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7 namespace=k8s.io
Dec 16 13:14:07.012156 containerd[1546]: time="2025-12-16T13:14:07.011649262Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:14:07.026257 containerd[1546]: time="2025-12-16T13:14:07.026225844Z" level=info msg="received sandbox container exit event sandbox_id:\"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" exit_status:137 exited_at:{seconds:1765890846 nanos:966381097}" monitor_name=criService
Dec 16 13:14:07.026482 containerd[1546]: time="2025-12-16T13:14:07.026459814Z" level=info msg="TearDown network for sandbox \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" successfully"
Dec 16 13:14:07.026531 containerd[1546]: time="2025-12-16T13:14:07.026488044Z" level=info msg="StopPodSandbox for \"df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7\" returns successfully"
Dec 16 13:14:07.111133 kubelet[2747]: I1216 13:14:07.110985 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-etc-cni-netd\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.111133 kubelet[2747]: I1216 13:14:07.111036 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-hostproc\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.111133 kubelet[2747]: I1216 13:14:07.111058 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-xtables-lock\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.111133 kubelet[2747]: I1216 13:14:07.111079 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-run\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.111133 kubelet[2747]: I1216 13:14:07.111108 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxvl9\" (UniqueName: \"kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-kube-api-access-fxvl9\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.111133 kubelet[2747]: I1216 13:14:07.111133 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfefc91b-949b-4810-ba60-3e2102621050-clustermesh-secrets\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112062 kubelet[2747]: I1216 13:14:07.111159 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6q2\" (UniqueName: \"kubernetes.io/projected/e43f38f7-2e85-4778-a7b4-285e063b1084-kube-api-access-sb6q2\") pod \"e43f38f7-2e85-4778-a7b4-285e063b1084\" (UID: \"e43f38f7-2e85-4778-a7b4-285e063b1084\") "
Dec 16 13:14:07.112062 kubelet[2747]: I1216 13:14:07.111181 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e43f38f7-2e85-4778-a7b4-285e063b1084-cilium-config-path\") pod \"e43f38f7-2e85-4778-a7b4-285e063b1084\" (UID: \"e43f38f7-2e85-4778-a7b4-285e063b1084\") "
Dec 16 13:14:07.112062 kubelet[2747]: I1216 13:14:07.111201 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-lib-modules\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112062 kubelet[2747]: I1216 13:14:07.111222 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-bpf-maps\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112062 kubelet[2747]: I1216 13:14:07.111243 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cni-path\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112062 kubelet[2747]: I1216 13:14:07.111270 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-kernel\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112264 kubelet[2747]: I1216 13:14:07.111292 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-net\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112264 kubelet[2747]: I1216 13:14:07.111316 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfefc91b-949b-4810-ba60-3e2102621050-cilium-config-path\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112264 kubelet[2747]: I1216 13:14:07.111339 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-hubble-tls\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112264 kubelet[2747]: I1216 13:14:07.111362 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-cgroup\") pod \"bfefc91b-949b-4810-ba60-3e2102621050\" (UID: \"bfefc91b-949b-4810-ba60-3e2102621050\") "
Dec 16 13:14:07.112264 kubelet[2747]: I1216 13:14:07.111445 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.112422 kubelet[2747]: I1216 13:14:07.111491 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.112422 kubelet[2747]: I1216 13:14:07.111513 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-hostproc" (OuterVolumeSpecName: "hostproc") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.112422 kubelet[2747]: I1216 13:14:07.111531 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.112422 kubelet[2747]: I1216 13:14:07.111548 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.113273 kubelet[2747]: I1216 13:14:07.113231 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.115286 kubelet[2747]: I1216 13:14:07.115263 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cni-path" (OuterVolumeSpecName: "cni-path") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.115887 kubelet[2747]: I1216 13:14:07.115391 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.115887 kubelet[2747]: I1216 13:14:07.115423 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.119984 kubelet[2747]: I1216 13:14:07.119961 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:14:07.125772 kubelet[2747]: I1216 13:14:07.125749 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfefc91b-949b-4810-ba60-3e2102621050-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:14:07.126290 kubelet[2747]: I1216 13:14:07.126268 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:14:07.126475 kubelet[2747]: I1216 13:14:07.126455 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-kube-api-access-fxvl9" (OuterVolumeSpecName: "kube-api-access-fxvl9") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "kube-api-access-fxvl9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:14:07.126631 kubelet[2747]: I1216 13:14:07.126601 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43f38f7-2e85-4778-a7b4-285e063b1084-kube-api-access-sb6q2" (OuterVolumeSpecName: "kube-api-access-sb6q2") pod "e43f38f7-2e85-4778-a7b4-285e063b1084" (UID: "e43f38f7-2e85-4778-a7b4-285e063b1084"). InnerVolumeSpecName "kube-api-access-sb6q2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:14:07.127878 kubelet[2747]: I1216 13:14:07.127831 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfefc91b-949b-4810-ba60-3e2102621050-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfefc91b-949b-4810-ba60-3e2102621050" (UID: "bfefc91b-949b-4810-ba60-3e2102621050"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:14:07.128556 kubelet[2747]: I1216 13:14:07.128532 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43f38f7-2e85-4778-a7b4-285e063b1084-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e43f38f7-2e85-4778-a7b4-285e063b1084" (UID: "e43f38f7-2e85-4778-a7b4-285e063b1084"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:14:07.157526 systemd[1]: Removed slice kubepods-burstable-podbfefc91b_949b_4810_ba60_3e2102621050.slice - libcontainer container kubepods-burstable-podbfefc91b_949b_4810_ba60_3e2102621050.slice.
Dec 16 13:14:07.157664 systemd[1]: kubepods-burstable-podbfefc91b_949b_4810_ba60_3e2102621050.slice: Consumed 6.428s CPU time, 126M memory peak, 128K read from disk, 13.3M written to disk.
Dec 16 13:14:07.160361 systemd[1]: Removed slice kubepods-besteffort-pode43f38f7_2e85_4778_a7b4_285e063b1084.slice - libcontainer container kubepods-besteffort-pode43f38f7_2e85_4778_a7b4_285e063b1084.slice.
Dec 16 13:14:07.212310 kubelet[2747]: I1216 13:14:07.212282 2747 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-kernel\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212310 kubelet[2747]: I1216 13:14:07.212308 2747 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-host-proc-sys-net\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212310 kubelet[2747]: I1216 13:14:07.212317 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfefc91b-949b-4810-ba60-3e2102621050-cilium-config-path\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212310 kubelet[2747]: I1216 13:14:07.212325 2747 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-hubble-tls\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212332 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-cgroup\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212339 2747 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-etc-cni-netd\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212347 2747 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-hostproc\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212354 2747 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-xtables-lock\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212361 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cilium-run\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212368 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxvl9\" (UniqueName: \"kubernetes.io/projected/bfefc91b-949b-4810-ba60-3e2102621050-kube-api-access-fxvl9\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212376 2747 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfefc91b-949b-4810-ba60-3e2102621050-clustermesh-secrets\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.212511 kubelet[2747]: I1216 13:14:07.212385 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sb6q2\" (UniqueName: \"kubernetes.io/projected/e43f38f7-2e85-4778-a7b4-285e063b1084-kube-api-access-sb6q2\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.213002 kubelet[2747]: I1216 13:14:07.212392 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e43f38f7-2e85-4778-a7b4-285e063b1084-cilium-config-path\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.213002 kubelet[2747]: I1216 13:14:07.212400 2747 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-lib-modules\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.213002 kubelet[2747]: I1216 13:14:07.212407 2747 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-bpf-maps\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.213002 kubelet[2747]: I1216 13:14:07.212413 2747 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfefc91b-949b-4810-ba60-3e2102621050-cni-path\") on node \"172-239-197-230\" DevicePath \"\""
Dec 16 13:14:07.528226 kubelet[2747]: I1216 13:14:07.527633 2747 scope.go:117] "RemoveContainer" containerID="9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0"
Dec 16 13:14:07.532978 containerd[1546]: time="2025-12-16T13:14:07.532359732Z" level=info msg="RemoveContainer for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\""
Dec 16 13:14:07.541340 containerd[1546]: time="2025-12-16T13:14:07.541298594Z" level=info msg="RemoveContainer for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" returns successfully"
Dec 16 13:14:07.542424 kubelet[2747]: I1216 13:14:07.542388 2747 scope.go:117] "RemoveContainer" containerID="9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0"
Dec 16 13:14:07.544026 containerd[1546]: time="2025-12-16T13:14:07.543628714Z" level=error msg="ContainerStatus for \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\": not found"
Dec 16 13:14:07.544254 kubelet[2747]: E1216 13:14:07.544234 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\": not found" containerID="9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0"
Dec 16 13:14:07.544744 kubelet[2747]: I1216 13:14:07.544327 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0"} err="failed to get container status \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"9816b57c38c2687af6526a794523902598eecd0aa0886d95ea0679cb539195e0\": not found"
Dec 16 13:14:07.544744 kubelet[2747]: I1216 13:14:07.544500 2747 scope.go:117] "RemoveContainer" containerID="cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06"
Dec 16 13:14:07.552715 containerd[1546]: time="2025-12-16T13:14:07.552564306Z" level=info msg="RemoveContainer for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\""
Dec 16 13:14:07.561834 containerd[1546]: time="2025-12-16T13:14:07.561800536Z" level=info msg="RemoveContainer for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" returns successfully"
Dec 16 13:14:07.563142 kubelet[2747]: I1216 13:14:07.563119 2747 scope.go:117] "RemoveContainer" containerID="b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d"
Dec 16 13:14:07.566539 containerd[1546]: time="2025-12-16T13:14:07.566513677Z" level=info msg="RemoveContainer for \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\""
Dec 16 13:14:07.572151 containerd[1546]: time="2025-12-16T13:14:07.572130518Z" level=info msg="RemoveContainer for \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\" returns successfully"
Dec 16 13:14:07.572305 kubelet[2747]: I1216 13:14:07.572273 2747 scope.go:117] "RemoveContainer" containerID="4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c"
Dec 16 13:14:07.574370 containerd[1546]: time="2025-12-16T13:14:07.574343518Z" level=info msg="RemoveContainer for \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\""
Dec 16 13:14:07.577557 containerd[1546]: time="2025-12-16T13:14:07.577530269Z" level=info msg="RemoveContainer for \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\" returns successfully"
Dec 16 13:14:07.577964 kubelet[2747]: I1216 13:14:07.577654 2747 scope.go:117] "RemoveContainer" containerID="b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c"
Dec 16 13:14:07.579720 containerd[1546]: time="2025-12-16T13:14:07.579693559Z" level=info msg="RemoveContainer for \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\""
Dec 16 13:14:07.582878 containerd[1546]: time="2025-12-16T13:14:07.582657719Z" level=info msg="RemoveContainer for \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\" returns successfully"
Dec 16 13:14:07.583680 kubelet[2747]: I1216 13:14:07.583337 2747 scope.go:117] "RemoveContainer" containerID="f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa"
Dec 16 13:14:07.585892 containerd[1546]: time="2025-12-16T13:14:07.585789619Z" level=info msg="RemoveContainer for \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\""
Dec 16 13:14:07.590254 containerd[1546]: time="2025-12-16T13:14:07.589671900Z" level=info msg="RemoveContainer for \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\" returns successfully"
Dec 16 13:14:07.590434 kubelet[2747]: I1216 13:14:07.590419 2747 scope.go:117] "RemoveContainer" containerID="cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06"
Dec 16 13:14:07.590671 containerd[1546]: time="2025-12-16T13:14:07.590645510Z" level=error msg="ContainerStatus for \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\": not found"
Dec 16 13:14:07.590841 kubelet[2747]: E1216 13:14:07.590805 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\": not found" containerID="cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06"
Dec 16 13:14:07.591156 kubelet[2747]: I1216 13:14:07.591126 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06"} err="failed to get container status \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb03ffb6ed7b82e48b508ad043c2ad6db592f7f265d53c72c1042b7e5c336a06\": not found"
Dec 16 13:14:07.591156 kubelet[2747]: I1216 13:14:07.591152 2747 scope.go:117] "RemoveContainer" containerID="b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d"
Dec 16 13:14:07.591400 containerd[1546]: time="2025-12-16T13:14:07.591365700Z" level=error msg="ContainerStatus for \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\": not found"
Dec 16 13:14:07.591498 kubelet[2747]: E1216 13:14:07.591480 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\": not found" containerID="b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d"
Dec 16 13:14:07.591673 kubelet[2747]: I1216 13:14:07.591560 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d"} err="failed to get container status \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3768e3a3fa683d1140a5de932268dc3b219d6576c339b9ed484db49c157fb4d\": not found"
Dec 16 13:14:07.591673 kubelet[2747]: I1216 13:14:07.591617 2747 scope.go:117] "RemoveContainer" containerID="4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c"
Dec 16 13:14:07.592490 containerd[1546]: time="2025-12-16T13:14:07.592404860Z" level=error msg="ContainerStatus for \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\": not found"
Dec 16 13:14:07.593093 kubelet[2747]: E1216 13:14:07.593050 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\": not found" containerID="4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c"
Dec 16 13:14:07.593093 kubelet[2747]: I1216 13:14:07.593075 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c"} err="failed to get container status \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4083d14bf26f5681ad6e9f898685db49f0b6b229de5cbbd1c95ad36c8ae4475c\": not found"
Dec 16 13:14:07.593167 kubelet[2747]: I1216 13:14:07.593108 2747 scope.go:117] "RemoveContainer" containerID="b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c"
Dec 16 13:14:07.593366 containerd[1546]: time="2025-12-16T13:14:07.593319140Z" level=error msg="ContainerStatus for \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\": not found"
Dec 16 13:14:07.593664 kubelet[2747]: E1216 13:14:07.593645 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\": not found" containerID="b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c"
Dec 16 13:14:07.593700 kubelet[2747]: I1216 13:14:07.593666 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c"} err="failed to get container status \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b80522034ca65ea04ceefff5956dbf4d83b82175e20867e0c086c84edca28c4c\": not found"
Dec 16 13:14:07.593700 kubelet[2747]: I1216 13:14:07.593680 2747 scope.go:117] "RemoveContainer" containerID="f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa"
Dec 16 13:14:07.594402 containerd[1546]: time="2025-12-16T13:14:07.594346911Z" level=error msg="ContainerStatus for \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\": not found"
Dec 16 13:14:07.594731 kubelet[2747]: E1216 13:14:07.594687 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\": not found" containerID="f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa"
Dec 16 13:14:07.594830 kubelet[2747]: I1216 13:14:07.594814 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa"} err="failed to get container status \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8eea55ca1b40d50bd15be58a9a0c2f6be773be7814e4ecf5cdfc8a4189474aa\": not found"
Dec 16 13:14:07.881196 systemd[1]: var-lib-kubelet-pods-e43f38f7\x2d2e85\x2d4778\x2da7b4\x2d285e063b1084-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsb6q2.mount: Deactivated successfully.
Dec 16 13:14:07.881396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df421df16427edfd2fdeaf1be868b260871741edc6fa7edaa84b2fc40ce59ed7-shm.mount: Deactivated successfully.
Dec 16 13:14:07.881535 systemd[1]: var-lib-kubelet-pods-bfefc91b\x2d949b\x2d4810\x2dba60\x2d3e2102621050-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxvl9.mount: Deactivated successfully.
Dec 16 13:14:07.883668 systemd[1]: var-lib-kubelet-pods-bfefc91b\x2d949b\x2d4810\x2dba60\x2d3e2102621050-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 13:14:07.883817 systemd[1]: var-lib-kubelet-pods-bfefc91b\x2d949b\x2d4810\x2dba60\x2d3e2102621050-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 13:14:08.803032 sshd[4285]: Connection closed by 139.178.89.65 port 43250
Dec 16 13:14:08.803528 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:08.808265 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:14:08.808652 systemd[1]: sshd@21-172.239.197.230:22-139.178.89.65:43250.service: Deactivated successfully.
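[Annotation] The RemoveContainer/ContainerStatus exchanges above are idempotent cleanup: once a container has been removed, status lookups return gRPC NotFound, and the pod container deletor records the error but treats the container as already gone. A sketch of that pattern; the sentinel error and both functions are hypothetical stand-ins for the CRI calls:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    // containerStatus stands in for the CRI ContainerStatus call shown
    // failing with NotFound after a successful remove.
    func containerStatus(id string) error { return errNotFound }

    // deleteContainer treats NotFound as success: the container is gone,
    // which is exactly the state the cleanup wanted to reach.
    func deleteContainer(id string) error {
        if err := containerStatus(id); errors.Is(err, errNotFound) {
            return nil // already removed; log it and move on
        }
        // ... otherwise issue the actual RemoveContainer RPC here ...
        return nil
    }

    func main() {
        fmt.Println(deleteContainer("9816b57c") == nil) // true
    }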
Dec 16 13:14:08.810585 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:14:08.812492 systemd-logind[1526]: Removed session 21.
Dec 16 13:14:08.867392 systemd[1]: Started sshd@22-172.239.197.230:22-139.178.89.65:43254.service - OpenSSH per-connection server daemon (139.178.89.65:43254).
Dec 16 13:14:09.148172 kubelet[2747]: I1216 13:14:09.148123 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfefc91b-949b-4810-ba60-3e2102621050" path="/var/lib/kubelet/pods/bfefc91b-949b-4810-ba60-3e2102621050/volumes"
Dec 16 13:14:09.149122 kubelet[2747]: I1216 13:14:09.149103 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43f38f7-2e85-4778-a7b4-285e063b1084" path="/var/lib/kubelet/pods/e43f38f7-2e85-4778-a7b4-285e063b1084/volumes"
Dec 16 13:14:09.227434 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 43254 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:14:09.228907 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:09.234301 systemd-logind[1526]: New session 22 of user core.
Dec 16 13:14:09.242159 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:14:09.837107 sshd[4432]: Connection closed by 139.178.89.65 port 43254
Dec 16 13:14:09.839208 sshd-session[4429]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:09.843990 systemd[1]: sshd@22-172.239.197.230:22-139.178.89.65:43254.service: Deactivated successfully.
Dec 16 13:14:09.851144 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:14:09.854067 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:14:09.860636 systemd-logind[1526]: Removed session 22.
Dec 16 13:14:09.871182 systemd[1]: Created slice kubepods-burstable-pod12ed5384_3c98_421e_841e_9d90cb3bb74f.slice - libcontainer container kubepods-burstable-pod12ed5384_3c98_421e_841e_9d90cb3bb74f.slice.
Dec 16 13:14:09.899155 systemd[1]: Started sshd@23-172.239.197.230:22-139.178.89.65:43266.service - OpenSSH per-connection server daemon (139.178.89.65:43266).
Dec 16 13:14:09.926919 kubelet[2747]: I1216 13:14:09.926881 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-cilium-cgroup\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927011 kubelet[2747]: I1216 13:14:09.926922 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-lib-modules\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927011 kubelet[2747]: I1216 13:14:09.926938 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12ed5384-3c98-421e-841e-9d90cb3bb74f-cilium-ipsec-secrets\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927011 kubelet[2747]: I1216 13:14:09.926950 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-host-proc-sys-net\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927011 kubelet[2747]: I1216 13:14:09.926964 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12ed5384-3c98-421e-841e-9d90cb3bb74f-hubble-tls\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927011 kubelet[2747]: I1216 13:14:09.926977 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12ed5384-3c98-421e-841e-9d90cb3bb74f-clustermesh-secrets\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927147 kubelet[2747]: I1216 13:14:09.926989 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-host-proc-sys-kernel\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927147 kubelet[2747]: I1216 13:14:09.927002 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6g4p\" (UniqueName: \"kubernetes.io/projected/12ed5384-3c98-421e-841e-9d90cb3bb74f-kube-api-access-z6g4p\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927147 kubelet[2747]: I1216 13:14:09.927015 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-bpf-maps\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927147 kubelet[2747]: I1216 13:14:09.927026 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-hostproc\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927147 kubelet[2747]: I1216 13:14:09.927041 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-xtables-lock\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927147 kubelet[2747]: I1216 13:14:09.927053 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-cilium-run\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927293 kubelet[2747]: I1216 13:14:09.927067 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-etc-cni-netd\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927293 kubelet[2747]: I1216 13:14:09.927080 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12ed5384-3c98-421e-841e-9d90cb3bb74f-cilium-config-path\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:09.927293 kubelet[2747]: I1216 13:14:09.927093 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12ed5384-3c98-421e-841e-9d90cb3bb74f-cni-path\") pod \"cilium-t4xvx\" (UID: \"12ed5384-3c98-421e-841e-9d90cb3bb74f\") " pod="kube-system/cilium-t4xvx"
Dec 16 13:14:10.145798 kubelet[2747]: E1216 13:14:10.145097 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:10.178342 kubelet[2747]: E1216 13:14:10.178302 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:10.179827 containerd[1546]: time="2025-12-16T13:14:10.179784242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4xvx,Uid:12ed5384-3c98-421e-841e-9d90cb3bb74f,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:10.197346 containerd[1546]: time="2025-12-16T13:14:10.197071405Z" level=info msg="connecting to shim acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811" address="unix:///run/containerd/s/50f7d25bee7affaca07266f6db2e6ecb84d2c62de7fc327f2eb59035aa3203c8" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:10.229069 systemd[1]: Started cri-containerd-acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811.scope - libcontainer container acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811.
Dec 16 13:14:10.257008 sshd[4445]: Accepted publickey for core from 139.178.89.65 port 43266 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:14:10.258545 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:10.265451 systemd-logind[1526]: New session 23 of user core.
Dec 16 13:14:10.267643 containerd[1546]: time="2025-12-16T13:14:10.267432888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4xvx,Uid:12ed5384-3c98-421e-841e-9d90cb3bb74f,Namespace:kube-system,Attempt:0,} returns sandbox id \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\""
Dec 16 13:14:10.269224 kubelet[2747]: E1216 13:14:10.269170 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:10.272002 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:14:10.282582 containerd[1546]: time="2025-12-16T13:14:10.282239201Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:14:10.288484 containerd[1546]: time="2025-12-16T13:14:10.288463982Z" level=info msg="Container 7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:10.292323 containerd[1546]: time="2025-12-16T13:14:10.292272213Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564\""
Dec 16 13:14:10.294395 containerd[1546]: time="2025-12-16T13:14:10.294375173Z" level=info msg="StartContainer for \"7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564\""
Dec 16 13:14:10.295597 containerd[1546]: time="2025-12-16T13:14:10.295509053Z" level=info msg="connecting to shim 7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564" address="unix:///run/containerd/s/50f7d25bee7affaca07266f6db2e6ecb84d2c62de7fc327f2eb59035aa3203c8" protocol=ttrpc version=3
Dec 16 13:14:10.319986 systemd[1]: Started cri-containerd-7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564.scope - libcontainer container 7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564.
Dec 16 13:14:10.350915 containerd[1546]: time="2025-12-16T13:14:10.350262694Z" level=info msg="StartContainer for \"7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564\" returns successfully"
Dec 16 13:14:10.362236 systemd[1]: cri-containerd-7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564.scope: Deactivated successfully.
Dec 16 13:14:10.366166 containerd[1546]: time="2025-12-16T13:14:10.366119326Z" level=info msg="received container exit event container_id:\"7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564\" id:\"7525ac67e3c2da911e49e8c9bb9d54a3d8881aeefa82958124ea8d1c08041564\" pid:4511 exited_at:{seconds:1765890850 nanos:365760917}"
Dec 16 13:14:10.499608 sshd[4497]: Connection closed by 139.178.89.65 port 43266
Dec 16 13:14:10.499735 sshd-session[4445]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:10.507234 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit.
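The exit event above carries the shutdown time of the mount-cgroup init container as a protobuf timestamp split into seconds and nanos; converting it back confirms it matches the surrounding journal time of 13:14:10 UTC:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at:{seconds:1765890850 nanos:365760917} from the event above.
        t := time.Unix(1765890850, 365760917).UTC()
        fmt.Println(t.Format(time.RFC3339Nano)) // 2025-12-16T13:14:10.365760917Z
    }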
Dec 16 13:14:10.508207 systemd[1]: sshd@23-172.239.197.230:22-139.178.89.65:43266.service: Deactivated successfully.
Dec 16 13:14:10.511019 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:14:10.513941 systemd-logind[1526]: Removed session 23.
Dec 16 13:14:10.552122 kubelet[2747]: E1216 13:14:10.552073 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:10.556625 containerd[1546]: time="2025-12-16T13:14:10.556551473Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:14:10.565755 containerd[1546]: time="2025-12-16T13:14:10.565702015Z" level=info msg="Container bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:10.566765 systemd[1]: Started sshd@24-172.239.197.230:22-139.178.89.65:53156.service - OpenSSH per-connection server daemon (139.178.89.65:53156).
Dec 16 13:14:10.573997 containerd[1546]: time="2025-12-16T13:14:10.573949236Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb\""
Dec 16 13:14:10.574738 containerd[1546]: time="2025-12-16T13:14:10.574700797Z" level=info msg="StartContainer for \"bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb\""
Dec 16 13:14:10.578233 containerd[1546]: time="2025-12-16T13:14:10.578195187Z" level=info msg="connecting to shim bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb" address="unix:///run/containerd/s/50f7d25bee7affaca07266f6db2e6ecb84d2c62de7fc327f2eb59035aa3203c8" protocol=ttrpc version=3
Dec 16 13:14:10.621010 systemd[1]: Started cri-containerd-bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb.scope - libcontainer container bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb.
Dec 16 13:14:10.658086 containerd[1546]: time="2025-12-16T13:14:10.658048472Z" level=info msg="StartContainer for \"bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb\" returns successfully"
Dec 16 13:14:10.668554 systemd[1]: cri-containerd-bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb.scope: Deactivated successfully.
Dec 16 13:14:10.669614 containerd[1546]: time="2025-12-16T13:14:10.669053564Z" level=info msg="received container exit event container_id:\"bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb\" id:\"bbe946eeffe8132532a40c8103a55d46d8089f2ebef664bebaad66cd88dcd3eb\" pid:4564 exited_at:{seconds:1765890850 nanos:668713845}"
Dec 16 13:14:10.931615 sshd[4548]: Accepted publickey for core from 139.178.89.65 port 53156 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:14:10.933096 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:10.939150 systemd-logind[1526]: New session 24 of user core.
Dec 16 13:14:10.945980 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:14:11.227384 kubelet[2747]: E1216 13:14:11.227274 2747 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:14:11.555499 kubelet[2747]: E1216 13:14:11.555386 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:11.559150 containerd[1546]: time="2025-12-16T13:14:11.559082971Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:14:11.574803 containerd[1546]: time="2025-12-16T13:14:11.573992144Z" level=info msg="Container 4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:11.582304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133318184.mount: Deactivated successfully.
Dec 16 13:14:11.585483 containerd[1546]: time="2025-12-16T13:14:11.585346256Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f\""
Dec 16 13:14:11.588171 containerd[1546]: time="2025-12-16T13:14:11.588138927Z" level=info msg="StartContainer for \"4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f\""
Dec 16 13:14:11.590094 containerd[1546]: time="2025-12-16T13:14:11.589998027Z" level=info msg="connecting to shim 4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f" address="unix:///run/containerd/s/50f7d25bee7affaca07266f6db2e6ecb84d2c62de7fc327f2eb59035aa3203c8" protocol=ttrpc version=3
Dec 16 13:14:11.617007 systemd[1]: Started cri-containerd-4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f.scope - libcontainer container 4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f.
Dec 16 13:14:11.685848 containerd[1546]: time="2025-12-16T13:14:11.685803997Z" level=info msg="StartContainer for \"4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f\" returns successfully"
Dec 16 13:14:11.688161 systemd[1]: cri-containerd-4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f.scope: Deactivated successfully.
Dec 16 13:14:11.692648 containerd[1546]: time="2025-12-16T13:14:11.692558829Z" level=info msg="received container exit event container_id:\"4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f\" id:\"4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f\" pid:4616 exited_at:{seconds:1765890851 nanos:691713128}"
Dec 16 13:14:11.712896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c466e3b3e2fd418e8f7ddf7ce1561288b4c15be1ee117da8e1e425853f46b3f-rootfs.mount: Deactivated successfully.
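The recurring dns.go:154 errors report the kubelet clamping the node's resolv.conf to a fixed number of nameservers, which is why exactly three addresses appear in every "applied nameserver line". A rough Go sketch of that clamping: the parsing is simplified, the limit of 3 is inferred from the three surviving addresses in the log, and the fourth nameserver below is a hypothetical extra entry:

    package main

    import (
        "fmt"
        "strings"
    )

    // clampNameservers keeps at most max nameserver entries, mirroring the
    // behaviour behind the "Nameserver limits exceeded" messages above.
    func clampNameservers(resolvConf string, max int) []string {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > max {
            servers = servers[:max] // the rest are omitted and the error is logged
        }
        return servers
    }

    func main() {
        // The last entry is hypothetical (192.0.2.1 is a documentation address).
        conf := "nameserver 172.232.0.17\nnameserver 172.232.0.16\nnameserver 172.232.0.21\nnameserver 192.0.2.1\n"
        fmt.Println(clampNameservers(conf, 3)) // [172.232.0.17 172.232.0.16 172.232.0.21]
    }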
Dec 16 13:14:12.561025 kubelet[2747]: E1216 13:14:12.560807 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:12.565435 containerd[1546]: time="2025-12-16T13:14:12.565394106Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:14:12.577088 containerd[1546]: time="2025-12-16T13:14:12.577007408Z" level=info msg="Container 2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:12.584121 containerd[1546]: time="2025-12-16T13:14:12.584056430Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340\""
Dec 16 13:14:12.584873 containerd[1546]: time="2025-12-16T13:14:12.584836981Z" level=info msg="StartContainer for \"2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340\""
Dec 16 13:14:12.586037 containerd[1546]: time="2025-12-16T13:14:12.585991610Z" level=info msg="connecting to shim 2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340" address="unix:///run/containerd/s/50f7d25bee7affaca07266f6db2e6ecb84d2c62de7fc327f2eb59035aa3203c8" protocol=ttrpc version=3
Dec 16 13:14:12.624013 systemd[1]: Started cri-containerd-2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340.scope - libcontainer container 2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340.
Dec 16 13:14:12.670331 systemd[1]: cri-containerd-2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340.scope: Deactivated successfully.
Dec 16 13:14:12.672186 containerd[1546]: time="2025-12-16T13:14:12.671515889Z" level=info msg="received container exit event container_id:\"2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340\" id:\"2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340\" pid:4654 exited_at:{seconds:1765890852 nanos:671250300}"
Dec 16 13:14:12.684092 containerd[1546]: time="2025-12-16T13:14:12.684039322Z" level=info msg="StartContainer for \"2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340\" returns successfully"
Dec 16 13:14:12.705032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2025079948a434e73e68036fe4384120fd5b15459c6761523c8d762130a38340-rootfs.mount: Deactivated successfully.
Dec 16 13:14:13.570355 kubelet[2747]: E1216 13:14:13.569730 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:13.575642 containerd[1546]: time="2025-12-16T13:14:13.575605869Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:14:13.589562 containerd[1546]: time="2025-12-16T13:14:13.587475472Z" level=info msg="Container 9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:13.594234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156719795.mount: Deactivated successfully.
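The mount unit names above, e.g. var-lib-containerd-tmpmounts-containerd\x2dmount156719795.mount, use systemd's path escaping: the leading slash is dropped, a literal '-' becomes \x2d, and the remaining '/' separators become '-'. A rough sketch of that encoding for the plain-ASCII case (not the full systemd-escape algorithm, which also escapes other unsafe bytes):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates `systemd-escape --path` for simple ASCII paths.
    func escapePath(p string) string {
        p = strings.TrimPrefix(p, "/")
        var b strings.Builder
        for _, c := range []byte(p) {
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c == '-' || c == '\\':
                fmt.Fprintf(&b, `\x%02x`, c) // literal dashes are hex-escaped
            default:
                b.WriteByte(c)
            }
        }
        return b.String()
    }

    func main() {
        // Reproduces the shape of the unit name in the journal above.
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount156719795") + ".mount")
    }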
Dec 16 13:14:13.598005 containerd[1546]: time="2025-12-16T13:14:13.597771644Z" level=info msg="CreateContainer within sandbox \"acee396d5cbb127750d67e7390e14f68a240707b23c27da074fcdd2b4bcd5811\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f\""
Dec 16 13:14:13.598472 containerd[1546]: time="2025-12-16T13:14:13.598400345Z" level=info msg="StartContainer for \"9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f\""
Dec 16 13:14:13.599183 containerd[1546]: time="2025-12-16T13:14:13.599155404Z" level=info msg="connecting to shim 9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f" address="unix:///run/containerd/s/50f7d25bee7affaca07266f6db2e6ecb84d2c62de7fc327f2eb59035aa3203c8" protocol=ttrpc version=3
Dec 16 13:14:13.627211 systemd[1]: Started cri-containerd-9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f.scope - libcontainer container 9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f.
Dec 16 13:14:13.684401 containerd[1546]: time="2025-12-16T13:14:13.684346285Z" level=info msg="StartContainer for \"9b52a3913efa3544a103bcebde14c67f080049504a04ae7ca27f1522c47a750f\" returns successfully"
Dec 16 13:14:14.129158 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:14:14.479161 kubelet[2747]: I1216 13:14:14.479119 2747 setters.go:543] "Node became not ready" node="172-239-197-230" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:14:14Z","lastTransitionTime":"2025-12-16T13:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 13:14:14.576675 kubelet[2747]: E1216 13:14:14.576639 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:16.177649 kubelet[2747]: E1216 13:14:16.177610 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:16.934800 systemd-networkd[1436]: lxc_health: Link UP
Dec 16 13:14:16.943142 systemd-networkd[1436]: lxc_health: Gained carrier
Dec 16 13:14:18.008248 systemd-networkd[1436]: lxc_health: Gained IPv6LL
Dec 16 13:14:18.182038 kubelet[2747]: E1216 13:14:18.181988 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:18.219403 kubelet[2747]: I1216 13:14:18.219341 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t4xvx" podStartSLOduration=9.219327681 podStartE2EDuration="9.219327681s" podCreationTimestamp="2025-12-16 13:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:14.59014169 +0000 UTC m=+163.548395878" watchObservedRunningTime="2025-12-16 13:14:18.219327681 +0000 UTC m=+167.177581869"
Dec 16 13:14:18.587519 kubelet[2747]: E1216 13:14:18.587483 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:19.591023 kubelet[2747]: E1216 13:14:19.590981 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:21.145872 kubelet[2747]: E1216 13:14:21.145508 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Dec 16 13:14:23.879477 sshd[4596]: Connection closed by 139.178.89.65 port 53156
Dec 16 13:14:23.880388 sshd-session[4548]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:23.886304 systemd[1]: sshd@24-172.239.197.230:22-139.178.89.65:53156.service: Deactivated successfully.
Dec 16 13:14:23.889604 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:14:23.891527 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:14:23.893677 systemd-logind[1526]: Removed session 24.