Dec 16 13:12:32.093059 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:12:32.093099 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:12:32.093120 kernel: BIOS-provided physical RAM map:
Dec 16 13:12:32.093135 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 16 13:12:32.093147 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 16 13:12:32.093161 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 16 13:12:32.093178 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 16 13:12:32.093192 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 16 13:12:32.093206 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable
Dec 16 13:12:32.093241 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data
Dec 16 13:12:32.093255 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable
Dec 16 13:12:32.093269 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 16 13:12:32.093282 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 16 13:12:32.093297 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 16 13:12:32.093314 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 16 13:12:32.093333 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 16 13:12:32.093356 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 16 13:12:32.093372 kernel: NX (Execute Disable) protection: active
Dec 16 13:12:32.093388 kernel: APIC: Static calls initialized
Dec 16 13:12:32.093403 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:12:32.093419 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018
Dec 16 13:12:32.093434 kernel: random: crng init done
Dec 16 13:12:32.093450 kernel: secureboot: Secure boot disabled
Dec 16 13:12:32.093465 kernel: SMBIOS 2.4 present.
Dec 16 13:12:32.093480 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Dec 16 13:12:32.093499 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:12:32.093514 kernel: Hypervisor detected: KVM
Dec 16 13:12:32.093529 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 16 13:12:32.093544 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:12:32.093560 kernel: kvm-clock: using sched offset of 15611299661 cycles
Dec 16 13:12:32.093577 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:12:32.093593 kernel: tsc: Detected 2299.998 MHz processor
Dec 16 13:12:32.093608 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:12:32.093624 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:12:32.093639 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 16 13:12:32.093658 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Dec 16 13:12:32.093674 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:12:32.093690 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 16 13:12:32.093705 kernel: Using GB pages for direct mapping
Dec 16 13:12:32.093721 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:12:32.093744 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 16 13:12:32.093761 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 16 13:12:32.093782 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 16 13:12:32.093798 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 16 13:12:32.093815 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 16 13:12:32.093832 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Dec 16 13:12:32.093848 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 16 13:12:32.093865 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 16 13:12:32.093882 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 16 13:12:32.093902 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 16 13:12:32.093919 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 16 13:12:32.093935 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 16 13:12:32.093952 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 16 13:12:32.093968 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 16 13:12:32.093984 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 16 13:12:32.094001 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 16 13:12:32.094017 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 16 13:12:32.094034 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 16 13:12:32.094054 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 16 13:12:32.094069 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 16 13:12:32.094096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 16 13:12:32.094112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 16 13:12:32.094130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 16 13:12:32.094147 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Dec 16 13:12:32.094164 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Dec 16 13:12:32.094182 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Dec 16 13:12:32.094199 kernel: Zone ranges:
Dec 16 13:12:32.094254 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:12:32.094273 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:12:32.094291 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 16 13:12:32.094308 kernel: Device empty
Dec 16 13:12:32.094326 kernel: Movable zone start for each node
Dec 16 13:12:32.094350 kernel: Early memory node ranges
Dec 16 13:12:32.094368 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 16 13:12:32.094385 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 16 13:12:32.094403 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff]
Dec 16 13:12:32.094425 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff]
Dec 16 13:12:32.094442 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 16 13:12:32.094459 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 16 13:12:32.094477 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 16 13:12:32.094494 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:12:32.094511 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 16 13:12:32.094529 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 16 13:12:32.094547 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 16 13:12:32.094564 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 13:12:32.094585 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 16 13:12:32.094602 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 16 13:12:32.094620 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:12:32.094637 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:12:32.094655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:12:32.094672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:12:32.094690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:12:32.094707 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:12:32.094725 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:12:32.094746 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:12:32.094763 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:12:32.094780 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:12:32.094798 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:12:32.094815 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:12:32.094833 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:12:32.094850 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:12:32.094868 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 16 13:12:32.094885 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:12:32.094903 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:12:32.094925 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:12:32.094942 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:12:32.094974 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:12:32.094990 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:12:32.095008 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:12:32.095025 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:12:32.095045 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:12:32.095063 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:12:32.095084 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:12:32.095101 kernel: Fallback order for Node 0: 0
Dec 16 13:12:32.095118 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Dec 16 13:12:32.095136 kernel: Policy zone: Normal
Dec 16 13:12:32.095153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:12:32.095171 kernel: software IO TLB: area num 2.
Dec 16 13:12:32.095202 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:12:32.095245 kernel: Kernel/User page tables isolation: enabled
Dec 16 13:12:32.095264 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:12:32.095282 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:12:32.095301 kernel: Dynamic Preempt: voluntary
Dec 16 13:12:32.095320 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:12:32.095351 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:12:32.095370 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:12:32.095389 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:12:32.095408 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:12:32.095431 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:12:32.095449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:12:32.095468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:12:32.095487 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:12:32.095506 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:12:32.095525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:12:32.095544 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:12:32.095563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:12:32.095581 kernel: Console: colour dummy device 80x25
Dec 16 13:12:32.095604 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:12:32.095623 kernel: ACPI: Core revision 20240827
Dec 16 13:12:32.095642 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:12:32.095660 kernel: x2apic enabled
Dec 16 13:12:32.095679 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:12:32.095698 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 16 13:12:32.095717 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 16 13:12:32.095735 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 16 13:12:32.095754 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 16 13:12:32.095776 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 16 13:12:32.095795 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:12:32.095814 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Dec 16 13:12:32.095832 kernel: Spectre V2 : Mitigation: IBRS
Dec 16 13:12:32.095851 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:12:32.095869 kernel: RETBleed: Mitigation: IBRS
Dec 16 13:12:32.095888 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:12:32.095906 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Dec 16 13:12:32.095926 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:12:32.095948 kernel: MDS: Mitigation: Clear CPU buffers
Dec 16 13:12:32.095966 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:12:32.095985 kernel: active return thunk: its_return_thunk
Dec 16 13:12:32.096004 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:12:32.096022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:12:32.096042 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:12:32.096060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:12:32.096079 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:12:32.096097 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 16 13:12:32.096118 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:12:32.096136 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:12:32.096155 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:12:32.096174 kernel: landlock: Up and running.
Dec 16 13:12:32.096193 kernel: SELinux: Initializing.
Dec 16 13:12:32.096211 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:12:32.096249 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:12:32.096272 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 16 13:12:32.096289 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 16 13:12:32.096311 kernel: signal: max sigframe size: 1776
Dec 16 13:12:32.096327 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:12:32.096353 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:12:32.096370 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:12:32.096388 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:12:32.096406 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:12:32.096424 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:12:32.096442 kernel: .... node #0, CPUs: #1
Dec 16 13:12:32.096460 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 13:12:32.096484 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 13:12:32.096501 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:12:32.096517 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 16 13:12:32.096534 kernel: Memory: 7555812K/7860544K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 298900K reserved, 0K cma-reserved)
Dec 16 13:12:32.096551 kernel: devtmpfs: initialized
Dec 16 13:12:32.096568 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:12:32.096584 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 16 13:12:32.096601 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:12:32.096624 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:12:32.096641 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:12:32.096659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:12:32.096676 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:12:32.096695 kernel: audit: type=2000 audit(1765890746.759:1): state=initialized audit_enabled=0 res=1
Dec 16 13:12:32.096713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:12:32.096729 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:12:32.096747 kernel: cpuidle: using governor menu
Dec 16 13:12:32.096765 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:12:32.096789 kernel: dca service started, version 1.12.1
Dec 16 13:12:32.096807 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:12:32.096827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:12:32.096846 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:12:32.096864 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:12:32.096884 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:12:32.096903 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:12:32.096922 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:12:32.096940 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:12:32.096962 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:12:32.096982 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 13:12:32.097000 kernel: ACPI: Interpreter enabled
Dec 16 13:12:32.097020 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:12:32.097039 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:12:32.097058 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:12:32.097076 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 13:12:32.097095 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 16 13:12:32.097114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:12:32.097429 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:12:32.097627 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 13:12:32.097811 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 13:12:32.097834 kernel: PCI host bridge to bus 0000:00
Dec 16 13:12:32.098011 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:12:32.098183 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:12:32.098402 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:12:32.098576 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 16 13:12:32.098768 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:12:32.098976 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:12:32.099187 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:12:32.099425 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 16 13:12:32.099612 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 13:12:32.099813 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Dec 16 13:12:32.100007 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 16 13:12:32.100194 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Dec 16 13:12:32.100428 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:12:32.100633 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Dec 16 13:12:32.100822 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Dec 16 13:12:32.101023 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 13:12:32.101213 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Dec 16 13:12:32.101478 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Dec 16 13:12:32.101503 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:12:32.101523 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:12:32.101543 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:12:32.101562 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:12:32.101581 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 13:12:32.101605 kernel: iommu: Default domain type: Translated
Dec 16 13:12:32.101625 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:12:32.101643 kernel: efivars: Registered efivars operations
Dec 16 13:12:32.101662 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:12:32.101681 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:12:32.101700 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 16 13:12:32.101719 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 16 13:12:32.101737 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff]
Dec 16 13:12:32.101755 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 16 13:12:32.101778 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 16 13:12:32.101797 kernel: vgaarb: loaded
Dec 16 13:12:32.101815 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:12:32.101835 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:12:32.101853 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:12:32.101872 kernel: pnp: PnP ACPI init
Dec 16 13:12:32.101891 kernel: pnp: PnP ACPI: found 7 devices
Dec 16 13:12:32.101910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:12:32.101930 kernel: NET: Registered PF_INET protocol family
Dec 16 13:12:32.101953 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:12:32.101972 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 13:12:32.101992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:12:32.102012 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:12:32.102031 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:12:32.102050 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 13:12:32.102069 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:12:32.102089 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:12:32.102107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:12:32.102130 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:12:32.102325 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:12:32.102507 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:12:32.102674 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:12:32.102841 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 16 13:12:32.103031 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 13:12:32.103056 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:12:32.103082 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:12:32.103100 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 16 13:12:32.103119 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:12:32.103139 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 16 13:12:32.103158 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:12:32.103177 kernel: Initialise system trusted keyrings
Dec 16 13:12:32.103196 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 13:12:32.103216 kernel: Key type asymmetric registered
Dec 16 13:12:32.103258 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:12:32.103283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:12:32.103302 kernel: io scheduler mq-deadline registered
Dec 16 13:12:32.103321 kernel: io scheduler kyber registered
Dec 16 13:12:32.103347 kernel: io scheduler bfq registered
Dec 16 13:12:32.103366 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:12:32.103386 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 16 13:12:32.103586 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 16 13:12:32.103610 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 16 13:12:32.103796 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 16 13:12:32.103825 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 16 13:12:32.104009 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 16 13:12:32.104033 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:12:32.104052 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:12:32.104071 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 13:12:32.104090 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 16 13:12:32.104109 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 16 13:12:32.104328 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 16 13:12:32.104370 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:12:32.104390 kernel: i8042: Warning: Keylock active
Dec 16 13:12:32.104407 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:12:32.104423 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:12:32.104609 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 13:12:32.104778 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 13:12:32.105347 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:12:31 UTC (1765890751)
Dec 16 13:12:32.105527 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 13:12:32.105550 kernel: intel_pstate: CPU model not supported
Dec 16 13:12:32.105568 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:12:32.105587 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:12:32.105605 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:12:32.105622 kernel: Segment Routing with IPv6
Dec 16 13:12:32.105661 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:12:32.105677 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:12:32.105694 kernel: Key type dns_resolver registered
Dec 16 13:12:32.105711 kernel: IPI shorthand broadcast: enabled
Dec 16 13:12:32.105734 kernel: sched_clock: Marking stable (3892004112, 832132900)->(5014070515, -289933503)
Dec 16 13:12:32.105753 kernel: registered taskstats version 1
Dec 16 13:12:32.105771 kernel: Loading compiled-in X.509 certificates
Dec 16 13:12:32.105789 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:12:32.105807 kernel: Demotion targets for Node 0: null
Dec 16 13:12:32.105825 kernel: Key type .fscrypt registered
Dec 16 13:12:32.105842 kernel: Key type fscrypt-provisioning registered
Dec 16 13:12:32.105859 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:12:32.105879 kernel: ima: No architecture policies found
Dec 16 13:12:32.105902 kernel: clk: Disabling unused clocks
Dec 16 13:12:32.105919 kernel: Warning: unable to open an initial console.
Dec 16 13:12:32.105936 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:12:32.105953 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:12:32.105970 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:12:32.105988 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:12:32.106005 kernel: Run /init as init process
Dec 16 13:12:32.106023 kernel: with arguments:
Dec 16 13:12:32.106042 kernel: /init
Dec 16 13:12:32.106063 kernel: with environment:
Dec 16 13:12:32.106081 kernel: HOME=/
Dec 16 13:12:32.106097 kernel: TERM=linux
Dec 16 13:12:32.106117 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:12:32.106139 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:12:32.106159 systemd[1]: Detected virtualization google.
Dec 16 13:12:32.106176 systemd[1]: Detected architecture x86-64.
Dec 16 13:12:32.106200 systemd[1]: Running in initrd.
Dec 16 13:12:32.106237 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:12:32.106257 systemd[1]: Hostname set to .
Dec 16 13:12:32.106276 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:12:32.106295 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:12:32.106315 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:12:32.106935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:12:32.106964 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:12:32.106985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:12:32.107006 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:12:32.107029 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:12:32.107051 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:12:32.107076 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:12:32.107096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:12:32.107117 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:12:32.107138 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:12:32.107159 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:12:32.107179 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:12:32.107203 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:12:32.107278 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:12:32.107300 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:12:32.107326 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:12:32.107354 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:12:32.107375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:12:32.107396 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:12:32.107417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:12:32.107438 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:12:32.107459 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:12:32.107478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:12:32.107503 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:12:32.107524 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:12:32.107545 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:12:32.107565 systemd[1]: Starting systemd-journald.service - Journal Service... 
Dec 16 13:12:32.107586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:12:32.107614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:12:32.107635 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:12:32.107702 systemd-journald[191]: Collecting audit messages is disabled.
Dec 16 13:12:32.107746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:12:32.107772 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:12:32.107794 systemd-journald[191]: Journal started
Dec 16 13:12:32.107840 systemd-journald[191]: Runtime Journal (/run/log/journal/35d5c7086954428abdbe6599fa3dddd3) is 8M, max 148.6M, 140.6M free.
Dec 16 13:12:32.115344 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:12:32.123426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:12:32.126441 systemd-modules-load[193]: Inserted module 'overlay'
Dec 16 13:12:32.135401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:12:32.150287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:12:32.159502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:12:32.169432 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:12:32.171514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:12:32.182482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:12:32.191401 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:12:32.187998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:12:32.196028 kernel: Bridge firewalling registered
Dec 16 13:12:32.193048 systemd-modules-load[193]: Inserted module 'br_netfilter'
Dec 16 13:12:32.199601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:12:32.211430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:12:32.212191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:12:32.234240 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:12:32.240616 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:12:32.245126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:12:32.262413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:12:32.282117 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:12:32.335413 systemd-resolved[232]: Positive Trust Anchors:
Dec 16 13:12:32.335434 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:12:32.335510 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:12:32.340511 systemd-resolved[232]: Defaulting to hostname 'linux'.
Dec 16 13:12:32.342128 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:12:32.355494 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:12:32.410276 kernel: SCSI subsystem initialized
Dec 16 13:12:32.423256 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:12:32.434259 kernel: iscsi: registered transport (tcp)
Dec 16 13:12:32.459434 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:12:32.459521 kernel: QLogic iSCSI HBA Driver
Dec 16 13:12:32.483264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:12:32.506537 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:12:32.513160 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:12:32.572427 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:12:32.575446 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:12:32.633258 kernel: raid6: avx2x4 gen() 18294 MB/s
Dec 16 13:12:32.650253 kernel: raid6: avx2x2 gen() 18243 MB/s
Dec 16 13:12:32.667682 kernel: raid6: avx2x1 gen() 14088 MB/s
Dec 16 13:12:32.667732 kernel: raid6: using algorithm avx2x4 gen() 18294 MB/s
Dec 16 13:12:32.685696 kernel: raid6: .... xor() 7984 MB/s, rmw enabled
Dec 16 13:12:32.685739 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 13:12:32.708268 kernel: xor: automatically using best checksumming function avx
Dec 16 13:12:32.897335 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:12:32.908120 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:12:32.913755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:12:32.951372 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Dec 16 13:12:32.961581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:12:32.968725 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:12:33.008564 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Dec 16 13:12:33.045060 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:12:33.052556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:12:33.155009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:12:33.160502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:12:33.282252 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:12:33.290246 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Dec 16 13:12:33.355255 kernel: scsi host0: Virtio SCSI HBA
Dec 16 13:12:33.370263 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 16 13:12:33.377260 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 16 13:12:33.387766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:12:33.387985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:12:33.392699 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:12:33.405085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:12:33.410937 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:12:33.408826 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:12:33.442610 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Dec 16 13:12:33.442984 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 16 13:12:33.447517 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 16 13:12:33.447862 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 16 13:12:33.450296 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 13:12:33.460626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:12:33.469267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:12:33.469341 kernel: GPT:17805311 != 33554431
Dec 16 13:12:33.469366 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:12:33.469390 kernel: GPT:17805311 != 33554431
Dec 16 13:12:33.469411 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:12:33.469432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:12:33.471246 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 16 13:12:33.585243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 16 13:12:33.585968 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:12:33.617146 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 16 13:12:33.638983 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 16 13:12:33.639269 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 16 13:12:33.660451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 16 13:12:33.663436 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:12:33.667348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:12:33.671339 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:12:33.676481 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:12:33.689403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:12:33.700485 disk-uuid[594]: Primary Header is updated.
Dec 16 13:12:33.700485 disk-uuid[594]: Secondary Entries is updated.
Dec 16 13:12:33.700485 disk-uuid[594]: Secondary Header is updated.
Dec 16 13:12:33.714655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:12:33.719493 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:12:34.750183 disk-uuid[595]: The operation has completed successfully.
Dec 16 13:12:34.752366 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:12:34.825051 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:12:34.825202 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:12:34.877989 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:12:34.903438 sh[616]: Success
Dec 16 13:12:34.927355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:12:34.927433 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:12:34.927472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:12:34.938241 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 16 13:12:35.018725 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:12:35.023083 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:12:35.041921 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:12:35.060260 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (628)
Dec 16 13:12:35.063541 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:12:35.063593 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:12:35.093831 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:12:35.093900 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:12:35.093931 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:12:35.097820 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:12:35.102356 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:12:35.104713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:12:35.106875 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:12:35.115653 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:12:35.150254 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (657)
Dec 16 13:12:35.153709 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:35.153771 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:12:35.167932 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:12:35.168014 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:12:35.168040 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:12:35.175265 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:35.177295 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:12:35.182705 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:12:35.295311 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:12:35.309814 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:12:35.429369 systemd-networkd[797]: lo: Link UP
Dec 16 13:12:35.429945 systemd-networkd[797]: lo: Gained carrier
Dec 16 13:12:35.434037 systemd-networkd[797]: Enumeration completed
Dec 16 13:12:35.434376 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:12:35.434922 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:12:35.435058 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:12:35.437454 systemd[1]: Reached target network.target - Network.
Dec 16 13:12:35.450865 ignition[722]: Ignition 2.22.0
Dec 16 13:12:35.438260 systemd-networkd[797]: eth0: Link UP
Dec 16 13:12:35.450877 ignition[722]: Stage: fetch-offline
Dec 16 13:12:35.438512 systemd-networkd[797]: eth0: Gained carrier
Dec 16 13:12:35.450930 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:35.438530 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:12:35.450944 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:35.452313 systemd-networkd[797]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 16 13:12:35.451076 ignition[722]: parsed url from cmdline: ""
Dec 16 13:12:35.454117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:12:35.451083 ignition[722]: no config URL provided
Dec 16 13:12:35.463558 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:12:35.451092 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:12:35.451104 ignition[722]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:12:35.451116 ignition[722]: failed to fetch config: resource requires networking
Dec 16 13:12:35.451846 ignition[722]: Ignition finished successfully
Dec 16 13:12:35.507136 ignition[806]: Ignition 2.22.0
Dec 16 13:12:35.507284 ignition[806]: Stage: fetch
Dec 16 13:12:35.507622 ignition[806]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:35.507638 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:35.507871 ignition[806]: parsed url from cmdline: ""
Dec 16 13:12:35.507979 ignition[806]: no config URL provided
Dec 16 13:12:35.507992 ignition[806]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:12:35.517780 unknown[806]: fetched base config from "system"
Dec 16 13:12:35.508006 ignition[806]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:12:35.517791 unknown[806]: fetched base config from "system"
Dec 16 13:12:35.508041 ignition[806]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 16 13:12:35.517800 unknown[806]: fetched user config from "gcp"
Dec 16 13:12:35.513153 ignition[806]: GET result: OK
Dec 16 13:12:35.521585 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:12:35.513276 ignition[806]: parsing config with SHA512: 43db432927f857973398e284eee1c94b144ecf89dcbb39884a5c97aca8012beba1f57f9893630499e2adbfcbf627905e0c992e4be71c1722c54821a57e8736cc
Dec 16 13:12:35.526077 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:12:35.518808 ignition[806]: fetch: fetch complete
Dec 16 13:12:35.518814 ignition[806]: fetch: fetch passed
Dec 16 13:12:35.518870 ignition[806]: Ignition finished successfully
Dec 16 13:12:35.572436 ignition[813]: Ignition 2.22.0
Dec 16 13:12:35.572453 ignition[813]: Stage: kargs
Dec 16 13:12:35.575783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:12:35.572667 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:35.580677 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:12:35.572684 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:35.573755 ignition[813]: kargs: kargs passed
Dec 16 13:12:35.573809 ignition[813]: Ignition finished successfully
Dec 16 13:12:35.622911 ignition[819]: Ignition 2.22.0
Dec 16 13:12:35.622927 ignition[819]: Stage: disks
Dec 16 13:12:35.626718 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:12:35.623174 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:35.629613 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:12:35.623190 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:35.632523 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:12:35.625073 ignition[819]: disks: disks passed
Dec 16 13:12:35.636512 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:12:35.625153 ignition[819]: Ignition finished successfully
Dec 16 13:12:35.640491 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:12:35.644485 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:12:35.650581 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:12:35.694976 systemd-fsck[827]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Dec 16 13:12:35.704841 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:12:35.713002 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:12:35.891250 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:12:35.892529 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:12:35.896997 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:12:35.902359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:12:35.908863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:12:35.916925 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:12:35.917011 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:12:35.917052 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:12:35.926973 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:12:35.936508 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (835)
Dec 16 13:12:35.936547 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:35.936571 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:12:35.937251 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:12:35.945473 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:12:35.945515 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:12:35.945531 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:12:35.948287 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:12:36.053133 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:12:36.063430 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:12:36.069765 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:12:36.075992 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:12:36.221545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:12:36.228864 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:12:36.234786 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:12:36.261403 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:12:36.265504 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:36.298312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:12:36.305472 ignition[948]: INFO : Ignition 2.22.0
Dec 16 13:12:36.305472 ignition[948]: INFO : Stage: mount
Dec 16 13:12:36.314381 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:36.314381 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:36.314381 ignition[948]: INFO : mount: mount passed
Dec 16 13:12:36.314381 ignition[948]: INFO : Ignition finished successfully
Dec 16 13:12:36.308660 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:12:36.312672 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:12:36.342148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:12:36.370265 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (960)
Dec 16 13:12:36.373069 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:12:36.373124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:12:36.378801 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:12:36.378874 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:12:36.378911 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:12:36.382314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:12:36.424208 ignition[977]: INFO : Ignition 2.22.0
Dec 16 13:12:36.424208 ignition[977]: INFO : Stage: files
Dec 16 13:12:36.430375 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:36.430375 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:36.430375 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:12:36.430375 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:12:36.430375 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:12:36.445334 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:12:36.445334 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:12:36.445334 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:12:36.445334 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:12:36.445334 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:12:36.436021 unknown[977]: wrote ssh authorized keys file for user: core
Dec 16 13:12:36.606509 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:12:36.862407 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:12:36.862407 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:12:36.870364 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:12:37.065364 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:12:37.222381 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:12:37.222381 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:37.230358 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:12:37.405458 systemd-networkd[797]: eth0: Gained IPv6LL
Dec 16 13:12:37.534660 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:12:37.944938 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:12:37.944938 ignition[977]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:12:37.953359 ignition[977]: INFO : files: files passed
Dec 16 13:12:37.953359 ignition[977]: INFO : Ignition finished successfully
Dec 16 13:12:37.953246 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:12:37.955484 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:12:37.967867 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:12:38.011481 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:12:38.011481 initrd-setup-root-after-ignition[1006]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:12:37.985652 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:12:38.020480 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:12:37.986829 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:12:37.999160 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:12:38.002511 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:12:38.009525 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:12:38.072609 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:12:38.072780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:12:38.077873 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:12:38.080573 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:12:38.084654 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:12:38.086562 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:12:38.116325 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:12:38.118897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:12:38.147105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:12:38.147496 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:12:38.151770 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:12:38.155758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:12:38.156179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:12:38.163622 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:12:38.167775 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:12:38.171775 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:12:38.175786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:12:38.179741 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:12:38.183771 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:12:38.187905 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:12:38.191783 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:12:38.196005 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:12:38.200798 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:12:38.204835 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:12:38.208772 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:12:38.209404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:12:38.219361 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:12:38.219766 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:12:38.223789 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:12:38.224213 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:12:38.228783 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:12:38.229407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:12:38.241394 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:12:38.241828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:12:38.244734 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:12:38.244933 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:12:38.251929 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:12:38.264720 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:12:38.265028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:12:38.274378 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:12:38.278434 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:12:38.278737 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:12:38.282131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:12:38.282651 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:12:38.303574 ignition[1031]: INFO : Ignition 2.22.0
Dec 16 13:12:38.303574 ignition[1031]: INFO : Stage: umount
Dec 16 13:12:38.303574 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:12:38.303574 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:12:38.315612 ignition[1031]: INFO : umount: umount passed
Dec 16 13:12:38.315612 ignition[1031]: INFO : Ignition finished successfully
Dec 16 13:12:38.306298 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:12:38.306457 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:12:38.308197 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:12:38.308643 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:12:38.323912 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:12:38.324650 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:12:38.324728 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:12:38.329462 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:12:38.329540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:12:38.336433 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:12:38.336514 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:12:38.342398 systemd[1]: Stopped target network.target - Network.
Dec 16 13:12:38.346332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:12:38.346423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:12:38.350368 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:12:38.354333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:12:38.358409 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:12:38.360521 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:12:38.364568 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:12:38.368545 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:12:38.368703 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:12:38.372524 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:12:38.372676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:12:38.376543 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:12:38.376720 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:12:38.380658 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:12:38.380731 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:12:38.384947 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:12:38.389861 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:12:38.390699 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:12:38.390938 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:12:38.400764 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:12:38.401076 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:12:38.401194 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:12:38.407954 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:12:38.408294 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:12:38.408414 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:12:38.414067 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:12:38.419395 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:12:38.419468 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:12:38.422518 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:12:38.422688 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:12:38.428746 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:12:38.437295 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:12:38.437384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:12:38.441369 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:12:38.441444 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:12:38.444553 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:12:38.444720 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:12:38.449483 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:12:38.449550 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:12:38.460298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:12:38.465147 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:12:38.465237 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:12:38.476681 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:12:38.477082 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:12:38.482326 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:12:38.482477 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:12:38.492599 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:12:38.492717 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:12:38.498397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:12:38.498461 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:12:38.504373 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:12:38.504452 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:12:38.512334 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:12:38.512421 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:12:38.519338 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:12:38.519429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:12:38.527570 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:12:38.533483 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:12:38.533555 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:12:38.540769 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:12:38.540934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:12:38.551385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:12:38.551463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:12:38.559111 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:12:38.626335 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:12:38.559172 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:12:38.559216 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:12:38.559749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:12:38.559853 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:12:38.563892 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:12:38.568349 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:12:38.596663 systemd[1]: Switching root.
Dec 16 13:12:38.646335 systemd-journald[191]: Journal stopped
Dec 16 13:12:40.614971 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:12:40.615026 kernel: SELinux: policy capability open_perms=1
Dec 16 13:12:40.615054 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:12:40.615072 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:12:40.615089 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:12:40.615106 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:12:40.615126 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:12:40.615142 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:12:40.615162 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:12:40.615180 kernel: audit: type=1403 audit(1765890759.326:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:12:40.615200 systemd[1]: Successfully loaded SELinux policy in 69.599ms.
Dec 16 13:12:40.615240 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.005ms.
Dec 16 13:12:40.615261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:12:40.615282 systemd[1]: Detected virtualization google.
Dec 16 13:12:40.615309 systemd[1]: Detected architecture x86-64.
Dec 16 13:12:40.615330 systemd[1]: Detected first boot.
Dec 16 13:12:40.615352 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:12:40.615374 zram_generator::config[1074]: No configuration found.
Dec 16 13:12:40.615396 kernel: Guest personality initialized and is inactive
Dec 16 13:12:40.615416 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:12:40.615441 kernel: Initialized host personality
Dec 16 13:12:40.615461 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:12:40.615482 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:12:40.615504 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:12:40.615525 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:12:40.615548 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:12:40.615569 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:12:40.615594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:12:40.615617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:12:40.615651 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:12:40.615672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:12:40.615693 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:12:40.615715 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:12:40.615736 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:12:40.615761 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:12:40.615783 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:12:40.615807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:12:40.615828 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:12:40.615849 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:12:40.615870 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:12:40.615898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:12:40.615921 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:12:40.615943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:12:40.615969 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:12:40.615990 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:12:40.616013 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:12:40.616035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:12:40.616058 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:12:40.616080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:12:40.616102 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:12:40.616128 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:12:40.616150 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:12:40.616173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:12:40.616195 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:12:40.616217 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:12:40.616270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:12:40.616298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:12:40.616323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:12:40.616346 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:12:40.616368 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:12:40.616390 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:12:40.616414 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:12:40.616435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:40.616460 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:12:40.616483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:12:40.616506 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:12:40.616529 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:12:40.616552 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:12:40.616573 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:12:40.616595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:12:40.616618 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:12:40.616653 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:12:40.616675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:12:40.616698 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:12:40.616719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:12:40.616742 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:12:40.616764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:12:40.616786 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:12:40.616812 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:12:40.616834 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:12:40.616860 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:12:40.616882 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:12:40.616906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:12:40.616929 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:12:40.616951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:12:40.616973 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:12:40.616995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:12:40.617018 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:12:40.617043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:12:40.617065 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:12:40.617085 systemd[1]: Stopped verity-setup.service.
Dec 16 13:12:40.617105 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:40.617125 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:12:40.617184 systemd-journald[1145]: Collecting audit messages is disabled.
Dec 16 13:12:40.617252 systemd-journald[1145]: Journal started
Dec 16 13:12:40.617293 systemd-journald[1145]: Runtime Journal (/run/log/journal/378a39c68a5e47058f94291a83b2aa85) is 8M, max 148.6M, 140.6M free.
Dec 16 13:12:40.181537 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:12:40.201009 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 13:12:40.201760 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:12:40.625894 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:12:40.628285 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:12:40.632202 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:12:40.636714 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:12:40.638375 kernel: loop: module loaded
Dec 16 13:12:40.640507 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:12:40.643486 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:12:40.649581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:12:40.650684 kernel: fuse: init (API version 7.41)
Dec 16 13:12:40.652808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:12:40.656098 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:12:40.662926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:12:40.665557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:12:40.669737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:12:40.669999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:12:40.673732 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:12:40.675359 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:12:40.678707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:12:40.678990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:12:40.683996 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:12:40.687721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:12:40.690764 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:12:40.721316 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:12:40.727638 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:12:40.735445 kernel: ACPI: bus type drm_connector registered
Dec 16 13:12:40.735352 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:12:40.737342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:12:40.737390 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:12:40.744130 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:12:40.750407 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:12:40.754550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:12:40.758062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:12:40.764482 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:12:40.767375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:12:40.770594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:12:40.773421 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:12:40.778953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:12:40.787804 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:12:40.794523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:12:40.797880 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:12:40.798857 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:12:40.802356 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:12:40.805599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:12:40.808462 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:12:40.827648 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:12:40.852295 kernel: loop0: detected capacity change from 0 to 110984
Dec 16 13:12:40.871163 systemd-journald[1145]: Time spent on flushing to /var/log/journal/378a39c68a5e47058f94291a83b2aa85 is 159.085ms for 963 entries.
Dec 16 13:12:40.871163 systemd-journald[1145]: System Journal (/var/log/journal/378a39c68a5e47058f94291a83b2aa85) is 8M, max 584.8M, 576.8M free.
Dec 16 13:12:41.074053 systemd-journald[1145]: Received client request to flush runtime journal.
Dec 16 13:12:41.074628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:12:41.074669 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:12:40.880437 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:12:40.883184 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:12:40.891731 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:12:40.902774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:12:41.009028 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:12:41.016529 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:12:41.023042 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:12:41.083697 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:12:41.095496 kernel: loop2: detected capacity change from 0 to 219144
Dec 16 13:12:41.134849 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Dec 16 13:12:41.134887 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Dec 16 13:12:41.141958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:12:41.151873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:12:41.182765 kernel: loop3: detected capacity change from 0 to 50736
Dec 16 13:12:41.209430 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:12:41.261275 kernel: loop4: detected capacity change from 0 to 110984
Dec 16 13:12:41.294694 kernel: loop5: detected capacity change from 0 to 128560
Dec 16 13:12:41.329340 kernel: loop6: detected capacity change from 0 to 219144
Dec 16 13:12:41.367257 kernel: loop7: detected capacity change from 0 to 50736
Dec 16 13:12:41.402907 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Dec 16 13:12:41.403940 (sd-merge)[1220]: Merged extensions into '/usr'.
Dec 16 13:12:41.417281 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:12:41.417448 systemd[1]: Reloading...
Dec 16 13:12:41.582394 zram_generator::config[1242]: No configuration found.
Dec 16 13:12:41.971195 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:12:42.113308 systemd[1]: Reloading finished in 694 ms.
Dec 16 13:12:42.130429 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:12:42.134012 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:12:42.149991 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:12:42.154268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:12:42.189974 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:12:42.190137 systemd[1]: Reloading...
Dec 16 13:12:42.221428 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:12:42.222683 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:12:42.225287 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:12:42.225907 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:12:42.230216 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:12:42.230995 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Dec 16 13:12:42.231299 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Dec 16 13:12:42.251658 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:12:42.251855 systemd-tmpfiles[1287]: Skipping /boot
Dec 16 13:12:42.292408 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:12:42.292585 systemd-tmpfiles[1287]: Skipping /boot
Dec 16 13:12:42.333263 zram_generator::config[1313]: No configuration found.
Dec 16 13:12:42.571878 systemd[1]: Reloading finished in 380 ms.
Dec 16 13:12:42.593833 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:12:42.611994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:12:42.627039 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:12:42.634930 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:12:42.640731 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:12:42.648275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:12:42.654789 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:12:42.661077 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:12:42.675137 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:42.676204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:12:42.684551 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:12:42.690287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:12:42.700929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:12:42.702683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:12:42.702913 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:12:42.707630 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:12:42.710326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:42.716404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:42.717095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:12:42.717468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:12:42.717637 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:12:42.717813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:42.724903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:42.725551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:12:42.731933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:12:42.737654 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 16 13:12:42.740483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:12:42.741020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:12:42.741847 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:12:42.746136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:12:42.762078 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:12:42.772735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:12:42.796879 systemd-udevd[1361]: Using default interface naming scheme 'v255'.
Dec 16 13:12:42.801961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:12:42.802884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:12:42.811786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:12:42.815989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:12:42.822559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:12:42.843622 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:12:42.848216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:12:42.852808 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:12:42.856900 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:12:42.857460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:12:42.859877 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 16 13:12:42.867499 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Dec 16 13:12:42.870381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:12:42.873448 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:12:42.899865 augenrules[1399]: No rules
Dec 16 13:12:42.903062 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:12:42.907035 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:12:42.907381 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:12:42.923546 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:12:42.932851 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:12:42.936762 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:12:42.937482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:12:42.945577 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:12:42.989482 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Dec 16 13:12:43.103132 systemd-resolved[1359]: Positive Trust Anchors:
Dec 16 13:12:43.103164 systemd-resolved[1359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:12:43.103247 systemd-resolved[1359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:12:43.119573 systemd-resolved[1359]: Defaulting to hostname 'linux'.
Dec 16 13:12:43.124347 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:12:43.127485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:12:43.130370 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:12:43.133497 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:12:43.136421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:12:43.139338 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:12:43.142567 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:12:43.145606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:12:43.149352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:12:43.152364 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:12:43.152428 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:12:43.155347 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:12:43.161446 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:12:43.168310 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:12:43.180174 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:12:43.183579 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:12:43.186884 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:12:43.198950 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:12:43.202697 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:12:43.206595 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:12:43.214066 systemd-networkd[1419]: lo: Link UP
Dec 16 13:12:43.214550 systemd-networkd[1419]: lo: Gained carrier
Dec 16 13:12:43.215888 systemd-networkd[1419]: Enumeration completed
Dec 16 13:12:43.219563 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:12:43.222799 systemd[1]: Reached target network.target - Network.
Dec 16 13:12:43.225360 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:12:43.228351 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:12:43.231438 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:12:43.231493 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:12:43.234899 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:12:43.241640 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:12:43.247128 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:12:43.254518 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:12:43.260535 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:12:43.265530 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:12:43.268356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:12:43.270502 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:12:43.282841 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:12:43.295803 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:12:43.303509 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:12:43.318457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:12:43.327161 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:12:43.360304 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:12:43.374547 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:12:43.380486 extend-filesystems[1461]: Found /dev/sda6
Dec 16 13:12:43.392998 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:12:43.409544 jq[1460]: false
Dec 16 13:12:43.406827 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Dec 16 13:12:43.407979 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:12:43.412316 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:12:43.423450 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:12:43.431047 extend-filesystems[1461]: Found /dev/sda9
Dec 16 13:12:43.442391 extend-filesystems[1461]: Checking size of /dev/sda9
Dec 16 13:12:43.449363 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Refreshing passwd entry cache
Dec 16 13:12:43.445487 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:12:43.442992 oslogin_cache_refresh[1462]: Refreshing passwd entry cache
Dec 16 13:12:43.461000 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:12:43.462545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:12:43.467691 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Failure getting users, quitting
Dec 16 13:12:43.467764 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:12:43.467688 oslogin_cache_refresh[1462]: Failure getting users, quitting
Dec 16 13:12:43.467872 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Refreshing group entry cache
Dec 16 13:12:43.467717 oslogin_cache_refresh[1462]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:12:43.467776 oslogin_cache_refresh[1462]: Refreshing group entry cache
Dec 16 13:12:43.477379 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Failure getting groups, quitting
Dec 16 13:12:43.477373 oslogin_cache_refresh[1462]: Failure getting groups, quitting
Dec 16 13:12:43.477528 google_oslogin_nss_cache[1462]: oslogin_cache_refresh[1462]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:12:43.477392 oslogin_cache_refresh[1462]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:12:43.486000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:12:43.487389 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:12:43.496542 jq[1485]: true
Dec 16 13:12:43.498113 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:12:43.499645 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:12:43.529077 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:12:43.535861 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Dec 16 13:12:43.538501 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:12:43.538701 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Dec 16 13:12:43.558293 jq[1501]: true
Dec 16 13:12:43.568124 extend-filesystems[1461]: Resized partition /dev/sda9
Dec 16 13:12:43.580329 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:12:43.580341 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:12:43.582121 systemd-networkd[1419]: eth0: Link UP
Dec 16 13:12:43.582469 systemd-networkd[1419]: eth0: Gained carrier
Dec 16 13:12:43.582511 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:12:43.602841 extend-filesystems[1512]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:12:43.598378 systemd-networkd[1419]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 16 13:12:43.642340 update_engine[1479]: I20251216 13:12:43.641084 1479 main.cc:92] Flatcar Update Engine starting
Dec 16 13:12:43.641909 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:12:43.656290 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Dec 16 13:12:43.644337 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:12:43.746367 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 16 13:12:43.758884 coreos-metadata[1457]: Dec 16 13:12:43.758 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Dec 16 13:12:43.767626 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:12:43.770412 coreos-metadata[1457]: Dec 16 13:12:43.767 INFO Fetch successful
Dec 16 13:12:43.770412 coreos-metadata[1457]: Dec 16 13:12:43.768 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Dec 16 13:12:43.772364 coreos-metadata[1457]: Dec 16 13:12:43.772 INFO Fetch successful
Dec 16 13:12:43.772364 coreos-metadata[1457]: Dec 16 13:12:43.772 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Dec 16 13:12:43.774540 coreos-metadata[1457]: Dec 16 13:12:43.774 INFO Fetch successful
Dec 16 13:12:43.774540 coreos-metadata[1457]: Dec 16 13:12:43.774 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Dec 16 13:12:43.782111 coreos-metadata[1457]: Dec 16 13:12:43.780 INFO Fetch successful
Dec 16 13:12:43.787254 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Dec 16 13:12:43.803290 extend-filesystems[1512]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 13:12:43.803290 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 2
Dec 16 13:12:43.803290 extend-filesystems[1512]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Dec 16 13:12:43.833627 extend-filesystems[1461]: Resized filesystem in /dev/sda9
Dec 16 13:12:43.823970 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:12:43.850876 tar[1489]: linux-amd64/LICENSE
Dec 16 13:12:43.850876 tar[1489]: linux-amd64/helm
Dec 16 13:12:43.826833 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:12:43.844306 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:12:43.880788 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:12:43.880866 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 16 13:12:43.901831 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:12:43.901932 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Dec 16 13:12:43.901980 kernel: ACPI: button: Sleep Button [SLPF]
Dec 16 13:12:43.943250 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:12:43.953988 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:12:43.959441 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:12:43.982005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:12:44.008860 systemd[1]: Starting sshkeys.service...
Dec 16 13:12:44.065325 dbus-daemon[1458]: [system] SELinux support is enabled
Dec 16 13:12:44.065568 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:12:44.082253 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 16 13:12:44.101309 dbus-daemon[1458]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1419 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 16 13:12:44.101449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:12:44.101736 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:12:44.102611 update_engine[1479]: I20251216 13:12:44.102382 1479 update_check_scheduler.cc:74] Next update check in 5m5s
Dec 16 13:12:44.112500 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:12:44.112731 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:12:44.117021 ntpd[1464]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: ----------------------------------------------------
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: corporation. Support and training for ntp-4 are
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: available at https://www.nwtime.org/support
Dec 16 13:12:44.121632 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: ----------------------------------------------------
Dec 16 13:12:44.117116 ntpd[1464]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:12:44.119928 ntpd[1464]: ----------------------------------------------------
Dec 16 13:12:44.119955 ntpd[1464]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:12:44.119970 ntpd[1464]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:12:44.119983 ntpd[1464]: corporation. Support and training for ntp-4 are
Dec 16 13:12:44.119997 ntpd[1464]: available at https://www.nwtime.org/support
Dec 16 13:12:44.120011 ntpd[1464]: ----------------------------------------------------
Dec 16 13:12:44.133399 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 13:12:44.139350 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: proto: precision = 0.099 usec (-23)
Dec 16 13:12:44.137117 ntpd[1464]: proto: precision = 0.099 usec (-23)
Dec 16 13:12:44.133539 systemd-logind[1471]: New seat seat0.
Dec 16 13:12:44.182264 kernel: ntpd[1464]: segfault at 24 ip 000055d04890caeb sp 00007ffeb81a4050 error 4 in ntpd[68aeb,55d0488aa000+80000] likely on CPU 0 (core 0, socket 0)
Dec 16 13:12:44.182355 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Dec 16 13:12:44.142598 ntpd[1464]: basedate set to 2025-11-30
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: basedate set to 2025-11-30
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Listen normally on 3 eth0 10.128.0.4:123
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: Listen normally on 4 lo [::1]:123
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:12:44.182544 ntpd[1464]: 16 Dec 13:12:44 ntpd[1464]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4%2]:123
Dec 16 13:12:44.142638 ntpd[1464]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:12:44.142809 ntpd[1464]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:12:44.142850 ntpd[1464]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:12:44.143088 ntpd[1464]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:12:44.143127 ntpd[1464]: Listen normally on 3 eth0 10.128.0.4:123
Dec 16 13:12:44.143171 ntpd[1464]: Listen normally on 4 lo [::1]:123
Dec 16 13:12:44.143212 ntpd[1464]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:12:44.144180 ntpd[1464]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4%2]:123
Dec 16 13:12:44.179728 dbus-daemon[1458]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:12:44.203742 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 13:12:44.213824 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:12:44.223538 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:12:44.233783 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:12:44.261892 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:12:44.288475 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 16 13:12:44.355371 systemd-coredump[1565]: Process 1464 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 13:12:44.396082 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:12:44.431526 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:12:44.433330 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Dec 16 13:12:44.475607 systemd[1]: Started systemd-coredump@0-1565-0.service - Process Core Dump (PID 1565/UID 0).
Dec 16 13:12:44.516261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:12:44.590715 coreos-metadata[1554]: Dec 16 13:12:44.590 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Dec 16 13:12:44.592679 coreos-metadata[1554]: Dec 16 13:12:44.591 INFO Fetch failed with 404: resource not found
Dec 16 13:12:44.592679 coreos-metadata[1554]: Dec 16 13:12:44.592 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Dec 16 13:12:44.597437 coreos-metadata[1554]: Dec 16 13:12:44.597 INFO Fetch successful
Dec 16 13:12:44.597437 coreos-metadata[1554]: Dec 16 13:12:44.597 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Dec 16 13:12:44.603398 coreos-metadata[1554]: Dec 16 13:12:44.599 INFO Fetch failed with 404: resource not found
Dec 16 13:12:44.603398 coreos-metadata[1554]: Dec 16 13:12:44.601 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Dec 16 13:12:44.604136 coreos-metadata[1554]: Dec 16 13:12:44.604 INFO Fetch failed with 404: resource not found
Dec 16 13:12:44.604136 coreos-metadata[1554]: Dec 16 13:12:44.604 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Dec 16 13:12:44.608596 coreos-metadata[1554]: Dec 16 13:12:44.608 INFO Fetch successful
Dec 16 13:12:44.610963 unknown[1554]: wrote ssh authorized keys file for user: core
Dec 16 13:12:44.613408 systemd-logind[1471]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:12:44.667218 systemd-logind[1471]: Watching system buttons on /dev/input/event3 (Sleep Button)
Dec 16 13:12:44.699884 update-ssh-keys[1574]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:12:44.698075 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 13:12:44.701685 systemd[1]: Finished sshkeys.service.
Dec 16 13:12:44.891325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:12:44.974291 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 16 13:12:44.977594 dbus-daemon[1458]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 16 13:12:44.982293 dbus-daemon[1458]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1557 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 16 13:12:44.992924 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 16 13:12:45.040855 containerd[1496]: time="2025-12-16T13:12:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:12:45.047060 containerd[1496]: time="2025-12-16T13:12:45.047015306Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:12:45.060966 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:12:45.074483 systemd-coredump[1569]: Process 1464 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1464: #0 0x000055d04890caeb n/a (ntpd + 0x68aeb) #1 0x000055d0488b5cdf n/a (ntpd + 0x11cdf) #2 0x000055d0488b6575 n/a (ntpd + 0x12575) #3 0x000055d0488b1d8a n/a (ntpd + 0xdd8a) #4 0x000055d0488b35d3 n/a (ntpd + 0xf5d3) #5 0x000055d0488bbfd1 n/a (ntpd + 0x17fd1) #6 0x000055d0488acc2d n/a (ntpd + 0x8c2d) #7 0x00007f0761e5316c n/a (libc.so.6 + 0x2716c) #8 0x00007f0761e53229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055d0488acc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Dec 16 13:12:45.083181 systemd[1]: systemd-coredump@0-1565-0.service: Deactivated successfully.
Dec 16 13:12:45.091175 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 13:12:45.091939 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 13:12:45.128925 containerd[1496]: time="2025-12-16T13:12:45.128215648Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.694µs"
Dec 16 13:12:45.128925 containerd[1496]: time="2025-12-16T13:12:45.128619311Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:12:45.128925 containerd[1496]: time="2025-12-16T13:12:45.128651262Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:12:45.128925 containerd[1496]: time="2025-12-16T13:12:45.128876359Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:12:45.129933 containerd[1496]: time="2025-12-16T13:12:45.129401626Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:12:45.129933 containerd[1496]: time="2025-12-16T13:12:45.129477535Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130187246Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130235918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130567537Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130592692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130613667Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130629542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.130756310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.131065733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.131116349Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.131134534Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:12:45.131680 containerd[1496]: time="2025-12-16T13:12:45.131184324Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:12:45.134373 containerd[1496]: time="2025-12-16T13:12:45.134341602Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:12:45.134586 containerd[1496]: time="2025-12-16T13:12:45.134563910Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:12:45.140390 containerd[1496]: time="2025-12-16T13:12:45.140355098Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140575386Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140610336Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140646148Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140671260Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140694826Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140718015Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140739600Z" level=info msg="loading
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140761130Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140780449Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140798928Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140821783Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.140984159Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.141013595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:12:45.142488 containerd[1496]: time="2025-12-16T13:12:45.141043072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141067844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141101194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141124274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141144838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141163744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141185927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141212107Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141264644Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141337870Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141358811Z" level=info msg="Start snapshots syncer" Dec 16 13:12:45.143152 containerd[1496]: time="2025-12-16T13:12:45.141395073Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:12:45.143699 containerd[1496]: time="2025-12-16T13:12:45.141821025Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:12:45.143699 containerd[1496]: time="2025-12-16T13:12:45.141902305Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:12:45.150313 containerd[1496]: time="2025-12-16T13:12:45.149638805Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:12:45.150313 containerd[1496]: time="2025-12-16T13:12:45.150087761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:12:45.150313 containerd[1496]: time="2025-12-16T13:12:45.150131662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:12:45.150313 containerd[1496]: time="2025-12-16T13:12:45.150155560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:12:45.150313 containerd[1496]: time="2025-12-16T13:12:45.150174976Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:12:45.150836 containerd[1496]: time="2025-12-16T13:12:45.150209844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:12:45.152093 containerd[1496]: time="2025-12-16T13:12:45.151011160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:12:45.152093 containerd[1496]: time="2025-12-16T13:12:45.151048225Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:12:45.152093 containerd[1496]: time="2025-12-16T13:12:45.151093152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:12:45.152093 containerd[1496]: time="2025-12-16T13:12:45.151816366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:12:45.152093 containerd[1496]: time="2025-12-16T13:12:45.151846227Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:12:45.153849 containerd[1496]: time="2025-12-16T13:12:45.153814249Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:12:45.154004 containerd[1496]: time="2025-12-16T13:12:45.153977489Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:12:45.154202 containerd[1496]: time="2025-12-16T13:12:45.154173477Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154414524Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154443133Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154464309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154497603Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154546831Z" level=info msg="runtime interface created" Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154559738Z" level=info msg="created NRI interface" Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154576155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154601624Z" level=info msg="Connect containerd service" Dec 16 13:12:45.154895 containerd[1496]: time="2025-12-16T13:12:45.154639635Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:12:45.166834 
containerd[1496]: time="2025-12-16T13:12:45.165601503Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:12:45.320653 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 16 13:12:45.327363 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:12:45.345130 polkitd[1585]: Started polkitd version 126 Dec 16 13:12:45.356958 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:12:45.360572 polkitd[1585]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:12:45.361420 polkitd[1585]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:12:45.361600 polkitd[1585]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:12:45.362344 polkitd[1585]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:12:45.362532 polkitd[1585]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:12:45.362691 polkitd[1585]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:12:45.370768 polkitd[1585]: Finished loading, compiling and executing 2 rules Dec 16 13:12:45.371166 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 16 13:12:45.373038 dbus-daemon[1458]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 16 13:12:45.376994 polkitd[1585]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 16 13:12:45.415473 ntpd[1606]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: ----------------------------------------------------
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: corporation. Support and training for ntp-4 are
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: available at https://www.nwtime.org/support
Dec 16 13:12:45.418256 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: ----------------------------------------------------
Dec 16 13:12:45.416197 ntpd[1606]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:12:45.416214 ntpd[1606]: ----------------------------------------------------
Dec 16 13:12:45.417253 ntpd[1606]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:12:45.417278 ntpd[1606]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:12:45.417292 ntpd[1606]: corporation. Support and training for ntp-4 are
Dec 16 13:12:45.417306 ntpd[1606]: available at https://www.nwtime.org/support
Dec 16 13:12:45.417320 ntpd[1606]: ----------------------------------------------------
Dec 16 13:12:45.421057 ntpd[1606]: proto: precision = 0.110 usec (-23)
Dec 16 13:12:45.422384 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: proto: precision = 0.110 usec (-23)
Dec 16 13:12:45.459583 kernel: ntpd[1606]: segfault at 24 ip 0000559934564aeb sp 00007ffd49b1af60 error 4 in ntpd[68aeb,559934502000+80000] likely on CPU 0 (core 0, socket 0)
Dec 16 13:12:45.459737 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: basedate set to 2025-11-30
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Listen normally on 3 eth0 10.128.0.4:123
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: Listen normally on 4 lo [::1]:123
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:12:45.459779 ntpd[1606]: 16 Dec 13:12:45 ntpd[1606]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4%2]:123
Dec 16 13:12:45.422640 ntpd[1606]: basedate set to 2025-11-30
Dec 16 13:12:45.459395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:12:45.422666 ntpd[1606]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:12:45.422812 ntpd[1606]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:12:45.422852 ntpd[1606]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:12:45.423087 ntpd[1606]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:12:45.423123 ntpd[1606]: Listen normally on 3 eth0 10.128.0.4:123
Dec 16 13:12:45.423162 ntpd[1606]: Listen normally on 4 lo [::1]:123
Dec 16 13:12:45.423201 ntpd[1606]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:12:45.423250 ntpd[1606]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4%2]:123
Dec 16 13:12:45.469418 systemd-networkd[1419]: eth0: Gained IPv6LL
Dec 16 13:12:45.478657 systemd-coredump[1622]: Process 1606 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 13:12:45.482202 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:12:45.491400 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:12:45.516720 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:12:45.532332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:12:45.546363 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:12:45.560552 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Dec 16 13:12:45.571076 containerd[1496]: time="2025-12-16T13:12:45.570953919Z" level=info msg="Start subscribing containerd event"
Dec 16 13:12:45.572664 containerd[1496]: time="2025-12-16T13:12:45.572426978Z" level=info msg="Start recovering state"
Dec 16 13:12:45.576862 systemd[1]: Started systemd-coredump@1-1622-0.service - Process Core Dump (PID 1622/UID 0).
Dec 16 13:12:45.578132 containerd[1496]: time="2025-12-16T13:12:45.578089945Z" level=info msg="Start event monitor"
Dec 16 13:12:45.580173 containerd[1496]: time="2025-12-16T13:12:45.579301213Z" level=info msg="Start cni network conf syncer for default"
Dec 16 13:12:45.580173 containerd[1496]: time="2025-12-16T13:12:45.579339048Z" level=info msg="Start streaming server"
Dec 16 13:12:45.580173 containerd[1496]: time="2025-12-16T13:12:45.579412446Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 13:12:45.580173 containerd[1496]: time="2025-12-16T13:12:45.579366682Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 13:12:45.580173 containerd[1496]: time="2025-12-16T13:12:45.579906532Z" level=info msg="runtime interface starting up..."
Dec 16 13:12:45.580173 containerd[1496]: time="2025-12-16T13:12:45.579933129Z" level=info msg="starting plugins..."
Dec 16 13:12:45.582101 containerd[1496]: time="2025-12-16T13:12:45.581611422Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 13:12:45.582101 containerd[1496]: time="2025-12-16T13:12:45.581731800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 13:12:45.583076 containerd[1496]: time="2025-12-16T13:12:45.582955699Z" level=info msg="containerd successfully booted in 0.546230s"
Dec 16 13:12:45.591582 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 13:12:45.602758 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:12:45.604346 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:12:45.619167 init.sh[1634]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Dec 16 13:12:45.619167 init.sh[1634]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Dec 16 13:12:45.626261 init.sh[1634]: + /usr/bin/google_instance_setup
Dec 16 13:12:45.632529 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:12:45.638711 systemd-hostnamed[1557]: Hostname set to (transient)
Dec 16 13:12:45.644997 systemd-resolved[1359]: System hostname changed to 'ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal'.
Dec 16 13:12:45.664692 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:12:45.707990 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:12:45.723206 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:12:45.737104 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:12:45.746683 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:12:45.850879 systemd-coredump[1635]: Process 1606 (ntpd) of user 0 dumped core.
Module libnss_usrfiles.so.2 without build-id.
Module libgcc_s.so.1 without build-id.
Module ld-linux-x86-64.so.2 without build-id.
Module libc.so.6 without build-id.
Module libcrypto.so.3 without build-id.
Module libm.so.6 without build-id.
Module libcap.so.2 without build-id.
Module ntpd without build-id.
Stack trace of thread 1606:
#0 0x0000559934564aeb n/a (ntpd + 0x68aeb)
#1 0x000055993450dcdf n/a (ntpd + 0x11cdf)
#2 0x000055993450e575 n/a (ntpd + 0x12575)
#3 0x0000559934509d8a n/a (ntpd + 0xdd8a)
#4 0x000055993450b5d3 n/a (ntpd + 0xf5d3)
#5 0x0000559934513fd1 n/a (ntpd + 0x17fd1)
#6 0x0000559934504c2d n/a (ntpd + 0x8c2d)
#7 0x00007f847982f16c n/a (libc.so.6 + 0x2716c)
#8 0x00007f847982f229 __libc_start_main (libc.so.6 + 0x27229)
#9 0x0000559934504c55 n/a (ntpd + 0x8c55)
ELF object binary architecture: AMD x86-64
Dec 16 13:12:45.857623 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 13:12:45.858295 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 13:12:45.866046 systemd[1]: systemd-coredump@1-1622-0.service: Deactivated successfully.
Dec 16 13:12:45.934621 tar[1489]: linux-amd64/README.md
Dec 16 13:12:45.958835 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:12:45.968959 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:12:45.973717 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:12:46.015133 ntpd[1664]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: ----------------------------------------------------
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: corporation. Support and training for ntp-4 are
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: available at https://www.nwtime.org/support
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: ----------------------------------------------------
Dec 16 13:12:46.016285 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: proto: precision = 0.084 usec (-23)
Dec 16 13:12:46.015242 ntpd[1664]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: basedate set to 2025-11-30
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listen normally on 3 eth0 10.128.0.4:123
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listen normally on 4 lo [::1]:123
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4%2]:123
Dec 16 13:12:46.017152 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:12:46.015259 ntpd[1664]: ----------------------------------------------------
Dec 16 13:12:46.019640 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:12:46.019640 ntpd[1664]: 16 Dec 13:12:46 ntpd[1664]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:12:46.015273 ntpd[1664]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:12:46.015286 ntpd[1664]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:12:46.015299 ntpd[1664]: corporation. Support and training for ntp-4 are
Dec 16 13:12:46.015312 ntpd[1664]: available at https://www.nwtime.org/support
Dec 16 13:12:46.015325 ntpd[1664]: ----------------------------------------------------
Dec 16 13:12:46.016172 ntpd[1664]: proto: precision = 0.084 usec (-23)
Dec 16 13:12:46.016526 ntpd[1664]: basedate set to 2025-11-30
Dec 16 13:12:46.016546 ntpd[1664]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:12:46.016657 ntpd[1664]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:12:46.016696 ntpd[1664]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:12:46.016918 ntpd[1664]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:12:46.016959 ntpd[1664]: Listen normally on 3 eth0 10.128.0.4:123
Dec 16 13:12:46.017003 ntpd[1664]: Listen normally on 4 lo [::1]:123
Dec 16 13:12:46.017046 ntpd[1664]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4%2]:123
Dec 16 13:12:46.017086 ntpd[1664]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:12:46.019464 ntpd[1664]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:12:46.019505 ntpd[1664]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:12:46.338979 instance-setup[1642]: INFO Running google_set_multiqueue.
Dec 16 13:12:46.362643 instance-setup[1642]: INFO Set channels for eth0 to 2.
Dec 16 13:12:46.367358 instance-setup[1642]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Dec 16 13:12:46.369119 instance-setup[1642]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Dec 16 13:12:46.369788 instance-setup[1642]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Dec 16 13:12:46.371701 instance-setup[1642]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Dec 16 13:12:46.372284 instance-setup[1642]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Dec 16 13:12:46.374216 instance-setup[1642]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Dec 16 13:12:46.374890 instance-setup[1642]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Dec 16 13:12:46.376735 instance-setup[1642]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Dec 16 13:12:46.386466 instance-setup[1642]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Dec 16 13:12:46.393017 instance-setup[1642]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Dec 16 13:12:46.396122 instance-setup[1642]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Dec 16 13:12:46.396179 instance-setup[1642]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Dec 16 13:12:46.418416 init.sh[1634]: + /usr/bin/google_metadata_script_runner --script-type startup
Dec 16 13:12:46.577819 startup-script[1697]: INFO Starting startup scripts.
Dec 16 13:12:46.584047 startup-script[1697]: INFO No startup scripts found in metadata.
Dec 16 13:12:46.584123 startup-script[1697]: INFO Finished running startup scripts.
Dec 16 13:12:46.606792 init.sh[1634]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Dec 16 13:12:46.606792 init.sh[1634]: + daemon_pids=()
Dec 16 13:12:46.607189 init.sh[1634]: + for d in accounts clock_skew network
Dec 16 13:12:46.607784 init.sh[1634]: + daemon_pids+=($!)
Dec 16 13:12:46.607784 init.sh[1634]: + for d in accounts clock_skew network
Dec 16 13:12:46.607915 init.sh[1700]: + /usr/bin/google_accounts_daemon
Dec 16 13:12:46.608322 init.sh[1634]: + daemon_pids+=($!)
Dec 16 13:12:46.608322 init.sh[1634]: + for d in accounts clock_skew network
Dec 16 13:12:46.608322 init.sh[1634]: + daemon_pids+=($!)
Dec 16 13:12:46.608447 init.sh[1634]: + NOTIFY_SOCKET=/run/systemd/notify
Dec 16 13:12:46.608497 init.sh[1634]: + /usr/bin/systemd-notify --ready
Dec 16 13:12:46.609247 init.sh[1701]: + /usr/bin/google_clock_skew_daemon
Dec 16 13:12:46.609806 init.sh[1702]: + /usr/bin/google_network_daemon
Dec 16 13:12:46.629337 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:12:46.643347 systemd[1]: Started sshd@0-10.128.0.4:22-139.178.68.195:49730.service - OpenSSH per-connection server daemon (139.178.68.195:49730).
Dec 16 13:12:46.653982 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Dec 16 13:12:46.672661 init.sh[1634]: + wait -n 1700 1701 1702
Dec 16 13:12:46.960461 google-networking[1702]: INFO Starting Google Networking daemon.
Dec 16 13:12:47.051422 sshd[1705]: Accepted publickey for core from 139.178.68.195 port 49730 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:47.055063 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:47.072053 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:12:47.081060 google-clock-skew[1701]: INFO Starting Google Clock Skew daemon.
Dec 16 13:12:47.086573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:12:47.089092 google-clock-skew[1701]: INFO Clock drift token has changed: 0.
Dec 16 13:12:47.117358 systemd-logind[1471]: New session 1 of user core.
Dec 16 13:12:47.131047 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:12:47.000804 systemd-resolved[1359]: Clock change detected. Flushing caches.
Dec 16 13:12:47.025731 systemd-journald[1145]: Time jumped backwards, rotating.
Dec 16 13:12:47.023006 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:12:47.001994 google-clock-skew[1701]: INFO Synced system time with hardware clock.
Dec 16 13:12:47.065947 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:12:47.069579 groupadd[1717]: group added to /etc/group: name=google-sudoers, GID=1000
Dec 16 13:12:47.071914 systemd-logind[1471]: New session c1 of user core.
Dec 16 13:12:47.074804 groupadd[1717]: group added to /etc/gshadow: name=google-sudoers
Dec 16 13:12:47.133471 groupadd[1717]: new group: name=google-sudoers, GID=1000
Dec 16 13:12:47.177376 google-accounts[1700]: INFO Starting Google Accounts daemon.
Dec 16 13:12:47.202514 google-accounts[1700]: WARNING OS Login not installed.
Dec 16 13:12:47.205631 google-accounts[1700]: INFO Creating a new user account for 0.
Dec 16 13:12:47.213865 init.sh[1733]: useradd: invalid user name '0': use --badname to ignore
Dec 16 13:12:47.214709 google-accounts[1700]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Dec 16 13:12:47.343859 systemd[1719]: Queued start job for default target default.target.
Dec 16 13:12:47.349857 systemd[1719]: Created slice app.slice - User Application Slice.
Dec 16 13:12:47.349903 systemd[1719]: Reached target paths.target - Paths.
Dec 16 13:12:47.349979 systemd[1719]: Reached target timers.target - Timers.
Dec 16 13:12:47.353251 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:12:47.376091 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:12:47.376287 systemd[1719]: Reached target sockets.target - Sockets.
Dec 16 13:12:47.376370 systemd[1719]: Reached target basic.target - Basic System.
Dec 16 13:12:47.376448 systemd[1719]: Reached target default.target - Main User Target.
Dec 16 13:12:47.376504 systemd[1719]: Startup finished in 288ms.
Dec 16 13:12:47.377044 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:12:47.390830 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:12:47.465130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:12:47.477455 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:12:47.487149 systemd[1]: Startup finished in 4.057s (kernel) + 7.533s (initrd) + 8.361s (userspace) = 19.952s.
Dec 16 13:12:47.489184 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:12:47.631095 systemd[1]: Started sshd@1-10.128.0.4:22-139.178.68.195:49736.service - OpenSSH per-connection server daemon (139.178.68.195:49736).
Dec 16 13:12:47.949197 sshd[1750]: Accepted publickey for core from 139.178.68.195 port 49736 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:47.951231 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:47.958948 systemd-logind[1471]: New session 2 of user core.
Dec 16 13:12:47.963834 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:12:48.170258 sshd[1757]: Connection closed by 139.178.68.195 port 49736
Dec 16 13:12:48.172182 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:48.178116 systemd[1]: sshd@1-10.128.0.4:22-139.178.68.195:49736.service: Deactivated successfully.
Dec 16 13:12:48.182272 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:12:48.188182 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:12:48.192306 systemd-logind[1471]: Removed session 2.
Dec 16 13:12:48.226541 systemd[1]: Started sshd@2-10.128.0.4:22-139.178.68.195:49738.service - OpenSSH per-connection server daemon (139.178.68.195:49738).
Dec 16 13:12:48.278356 kubelet[1743]: E1216 13:12:48.278265 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:12:48.281027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:12:48.281260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:12:48.281983 systemd[1]: kubelet.service: Consumed 1.188s CPU time, 256.6M memory peak.
Dec 16 13:12:48.536182 sshd[1764]: Accepted publickey for core from 139.178.68.195 port 49738 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:48.538122 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:48.544559 systemd-logind[1471]: New session 3 of user core.
Dec 16 13:12:48.551813 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:12:48.747749 sshd[1768]: Connection closed by 139.178.68.195 port 49738
Dec 16 13:12:48.748548 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:48.754453 systemd[1]: sshd@2-10.128.0.4:22-139.178.68.195:49738.service: Deactivated successfully.
Dec 16 13:12:48.756920 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:12:48.758075 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:12:48.760248 systemd-logind[1471]: Removed session 3.
Dec 16 13:12:48.803951 systemd[1]: Started sshd@3-10.128.0.4:22-139.178.68.195:49754.service - OpenSSH per-connection server daemon (139.178.68.195:49754).
Dec 16 13:12:49.112808 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 49754 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:49.114457 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:49.121680 systemd-logind[1471]: New session 4 of user core.
Dec 16 13:12:49.126801 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:12:49.324757 sshd[1777]: Connection closed by 139.178.68.195 port 49754
Dec 16 13:12:49.325556 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:49.331239 systemd[1]: sshd@3-10.128.0.4:22-139.178.68.195:49754.service: Deactivated successfully.
Dec 16 13:12:49.333747 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:12:49.335018 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:12:49.336959 systemd-logind[1471]: Removed session 4.
Dec 16 13:12:49.382068 systemd[1]: Started sshd@4-10.128.0.4:22-139.178.68.195:49768.service - OpenSSH per-connection server daemon (139.178.68.195:49768).
Dec 16 13:12:49.697027 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 49768 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:49.698703 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:49.704663 systemd-logind[1471]: New session 5 of user core.
Dec 16 13:12:49.711788 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:12:49.890364 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:12:49.890889 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:49.904926 sudo[1787]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:49.947402 sshd[1786]: Connection closed by 139.178.68.195 port 49768
Dec 16 13:12:49.948422 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:49.953926 systemd[1]: sshd@4-10.128.0.4:22-139.178.68.195:49768.service: Deactivated successfully.
Dec 16 13:12:49.956263 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:12:49.958884 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:12:49.960647 systemd-logind[1471]: Removed session 5.
Dec 16 13:12:50.003966 systemd[1]: Started sshd@5-10.128.0.4:22-139.178.68.195:49782.service - OpenSSH per-connection server daemon (139.178.68.195:49782).
Dec 16 13:12:50.311219 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 49782 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:50.313044 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:50.319672 systemd-logind[1471]: New session 6 of user core.
Dec 16 13:12:50.328885 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:12:50.493250 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:12:50.493761 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:50.500761 sudo[1798]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:50.514105 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:12:50.514577 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:50.526896 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:12:50.571445 augenrules[1820]: No rules
Dec 16 13:12:50.573457 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:12:50.573843 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:12:50.576851 sudo[1797]: pam_unix(sudo:session): session closed for user root
Dec 16 13:12:50.620774 sshd[1796]: Connection closed by 139.178.68.195 port 49782
Dec 16 13:12:50.621786 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Dec 16 13:12:50.628818 systemd[1]: sshd@5-10.128.0.4:22-139.178.68.195:49782.service: Deactivated successfully.
Dec 16 13:12:50.632007 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:12:50.634431 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:12:50.638442 systemd-logind[1471]: Removed session 6.
Dec 16 13:12:50.674843 systemd[1]: Started sshd@6-10.128.0.4:22-139.178.68.195:41658.service - OpenSSH per-connection server daemon (139.178.68.195:41658).
Dec 16 13:12:50.982948 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 41658 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:12:50.984578 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:12:50.990676 systemd-logind[1471]: New session 7 of user core.
Dec 16 13:12:50.997806 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:12:51.160293 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:12:51.160815 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:12:51.643853 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:12:51.673185 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:12:52.014695 dockerd[1851]: time="2025-12-16T13:12:52.013943694Z" level=info msg="Starting up"
Dec 16 13:12:52.016399 dockerd[1851]: time="2025-12-16T13:12:52.016349745Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:12:52.032407 dockerd[1851]: time="2025-12-16T13:12:52.032336577Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:12:52.216165 dockerd[1851]: time="2025-12-16T13:12:52.216113927Z" level=info msg="Loading containers: start."
Dec 16 13:12:52.235681 kernel: Initializing XFRM netlink socket
Dec 16 13:12:52.560220 systemd-networkd[1419]: docker0: Link UP
Dec 16 13:12:52.565822 dockerd[1851]: time="2025-12-16T13:12:52.565774898Z" level=info msg="Loading containers: done."
Dec 16 13:12:52.584511 dockerd[1851]: time="2025-12-16T13:12:52.584449450Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:12:52.584730 dockerd[1851]: time="2025-12-16T13:12:52.584571988Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:12:52.584730 dockerd[1851]: time="2025-12-16T13:12:52.584710906Z" level=info msg="Initializing buildkit"
Dec 16 13:12:52.586290 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3926656954-merged.mount: Deactivated successfully.
Dec 16 13:12:52.617672 dockerd[1851]: time="2025-12-16T13:12:52.617624452Z" level=info msg="Completed buildkit initialization"
Dec 16 13:12:52.627506 dockerd[1851]: time="2025-12-16T13:12:52.627459813Z" level=info msg="Daemon has completed initialization"
Dec 16 13:12:52.628135 dockerd[1851]: time="2025-12-16T13:12:52.627675217Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:12:52.627757 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:12:53.419552 containerd[1496]: time="2025-12-16T13:12:53.419501803Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 16 13:12:53.852055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233871627.mount: Deactivated successfully.
Dec 16 13:12:55.262416 containerd[1496]: time="2025-12-16T13:12:55.262340914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:55.263943 containerd[1496]: time="2025-12-16T13:12:55.263726573Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27075656"
Dec 16 13:12:55.265181 containerd[1496]: time="2025-12-16T13:12:55.265138069Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:55.268371 containerd[1496]: time="2025-12-16T13:12:55.268331167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:55.269734 containerd[1496]: time="2025-12-16T13:12:55.269691654Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.850141591s"
Dec 16 13:12:55.269829 containerd[1496]: time="2025-12-16T13:12:55.269743630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Dec 16 13:12:55.270638 containerd[1496]: time="2025-12-16T13:12:55.270409546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 13:12:56.545543 containerd[1496]: time="2025-12-16T13:12:56.545475177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:56.547014 containerd[1496]: time="2025-12-16T13:12:56.546859937Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21164374"
Dec 16 13:12:56.548036 containerd[1496]: time="2025-12-16T13:12:56.547996479Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:56.552656 containerd[1496]: time="2025-12-16T13:12:56.551250662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:56.552656 containerd[1496]: time="2025-12-16T13:12:56.552405755Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.281957465s"
Dec 16 13:12:56.552656 containerd[1496]: time="2025-12-16T13:12:56.552447890Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Dec 16 13:12:56.553292 containerd[1496]: time="2025-12-16T13:12:56.553261150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 13:12:57.500430 containerd[1496]: time="2025-12-16T13:12:57.500362972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:57.501672 containerd[1496]: time="2025-12-16T13:12:57.501620814Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15727843"
Dec 16 13:12:57.502475 containerd[1496]: time="2025-12-16T13:12:57.502410901Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:57.506906 containerd[1496]: time="2025-12-16T13:12:57.506838979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:57.508301 containerd[1496]: time="2025-12-16T13:12:57.508153607Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 954.753018ms"
Dec 16 13:12:57.508301 containerd[1496]: time="2025-12-16T13:12:57.508195506Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Dec 16 13:12:57.509127 containerd[1496]: time="2025-12-16T13:12:57.509093797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 13:12:58.435465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:12:58.440664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:12:58.629707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309455606.mount: Deactivated successfully.
Dec 16 13:12:58.756780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:12:58.769250 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:12:58.867990 kubelet[2138]: E1216 13:12:58.867911 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:12:58.874978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:12:58.875396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:12:58.876135 systemd[1]: kubelet.service: Consumed 249ms CPU time, 111.3M memory peak.
Dec 16 13:12:59.349473 containerd[1496]: time="2025-12-16T13:12:59.349405524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:59.350999 containerd[1496]: time="2025-12-16T13:12:59.350749452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25967188"
Dec 16 13:12:59.352183 containerd[1496]: time="2025-12-16T13:12:59.352135277Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:59.355209 containerd[1496]: time="2025-12-16T13:12:59.355168565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:12:59.356063 containerd[1496]: time="2025-12-16T13:12:59.356025477Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.846876206s"
Dec 16 13:12:59.356387 containerd[1496]: time="2025-12-16T13:12:59.356169934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Dec 16 13:12:59.356992 containerd[1496]: time="2025-12-16T13:12:59.356946769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 13:12:59.805632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164552805.mount: Deactivated successfully.
Dec 16 13:13:01.085923 containerd[1496]: time="2025-12-16T13:13:01.085859219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:01.087366 containerd[1496]: time="2025-12-16T13:13:01.087301545Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22394649"
Dec 16 13:13:01.088705 containerd[1496]: time="2025-12-16T13:13:01.088635391Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:01.092611 containerd[1496]: time="2025-12-16T13:13:01.092482218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:01.094103 containerd[1496]: time="2025-12-16T13:13:01.094061837Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.737076702s"
Dec 16 13:13:01.094337 containerd[1496]: time="2025-12-16T13:13:01.094218333Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Dec 16 13:13:01.095261 containerd[1496]: time="2025-12-16T13:13:01.095017566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 13:13:01.489256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20386216.mount: Deactivated successfully.
Dec 16 13:13:01.497062 containerd[1496]: time="2025-12-16T13:13:01.497001577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:01.498447 containerd[1496]: time="2025-12-16T13:13:01.498167649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=322152"
Dec 16 13:13:01.499915 containerd[1496]: time="2025-12-16T13:13:01.499870922Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:01.503946 containerd[1496]: time="2025-12-16T13:13:01.503900938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:01.505214 containerd[1496]: time="2025-12-16T13:13:01.505141115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 410.08558ms"
Dec 16 13:13:01.505373 containerd[1496]: time="2025-12-16T13:13:01.505345061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Dec 16 13:13:01.506281 containerd[1496]: time="2025-12-16T13:13:01.506225852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 13:13:01.851142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271061915.mount: Deactivated successfully.
Dec 16 13:13:04.922348 containerd[1496]: time="2025-12-16T13:13:04.922271024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:04.923948 containerd[1496]: time="2025-12-16T13:13:04.923844800Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74172452"
Dec 16 13:13:04.926611 containerd[1496]: time="2025-12-16T13:13:04.925144745Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:04.928887 containerd[1496]: time="2025-12-16T13:13:04.928835243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:04.930356 containerd[1496]: time="2025-12-16T13:13:04.930315821Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.424052627s"
Dec 16 13:13:04.930502 containerd[1496]: time="2025-12-16T13:13:04.930478678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Dec 16 13:13:08.883794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:13:08.886012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:08.910649 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:13:08.910797 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:13:08.911242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:08.914862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:08.960528 systemd[1]: Reload requested from client PID 2290 ('systemctl') (unit session-7.scope)...
Dec 16 13:13:08.960549 systemd[1]: Reloading...
Dec 16 13:13:09.120627 zram_generator::config[2335]: No configuration found.
Dec 16 13:13:09.458151 systemd[1]: Reloading finished in 496 ms.
Dec 16 13:13:09.542555 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:13:09.542741 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:13:09.543271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:09.543370 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.2M memory peak.
Dec 16 13:13:09.546705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:09.918812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:09.931191 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:13:09.989076 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:13:09.990616 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:13:09.990616 kubelet[2387]: I1216 13:13:09.989746 2387 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:13:10.952636 kubelet[2387]: I1216 13:13:10.951640 2387 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 13:13:10.952636 kubelet[2387]: I1216 13:13:10.951679 2387 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:13:10.952636 kubelet[2387]: I1216 13:13:10.951715 2387 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 13:13:10.952636 kubelet[2387]: I1216 13:13:10.951725 2387 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:13:10.952636 kubelet[2387]: I1216 13:13:10.952310 2387 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 13:13:10.962386 kubelet[2387]: E1216 13:13:10.962339 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 13:13:10.963173 kubelet[2387]: I1216 13:13:10.963141 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:13:10.967576 kubelet[2387]: I1216 13:13:10.967545 2387 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:13:10.971862 kubelet[2387]: I1216 13:13:10.971832 2387 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 13:13:10.973336 kubelet[2387]: I1216 13:13:10.973277 2387 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:13:10.973560 kubelet[2387]: I1216 13:13:10.973321 2387 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:13:10.973560 kubelet[2387]: I1216 13:13:10.973557 2387 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:13:10.973793 kubelet[2387]: I1216 13:13:10.973577 2387 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 13:13:10.973793 kubelet[2387]: I1216 13:13:10.973733 2387 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 13:13:10.977106 kubelet[2387]: I1216 13:13:10.977078 2387 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:13:10.977340 kubelet[2387]: I1216 13:13:10.977322 2387 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 13:13:10.977425 kubelet[2387]: I1216 13:13:10.977344 2387 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:13:10.977425 kubelet[2387]: I1216 13:13:10.977377 2387 kubelet.go:387] "Adding apiserver pod source"
Dec 16 13:13:10.977425 kubelet[2387]: I1216 13:13:10.977405 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:13:10.984620 kubelet[2387]: I1216 13:13:10.983906 2387 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:13:10.984876 kubelet[2387]: I1216 13:13:10.984855 2387 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 13:13:10.984991 kubelet[2387]: I1216 13:13:10.984978 2387 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 13:13:10.985111 kubelet[2387]: W1216 13:13:10.985097 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 13:13:10.987682 kubelet[2387]: E1216 13:13:10.987094 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:13:10.987682 kubelet[2387]: E1216 13:13:10.987249 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:13:11.003183 kubelet[2387]: I1216 13:13:11.002473 2387 server.go:1262] "Started kubelet" Dec 16 13:13:11.011054 kubelet[2387]: I1216 13:13:11.011020 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:13:11.018614 kubelet[2387]: I1216 13:13:11.017609 2387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:13:11.020115 kubelet[2387]: I1216 13:13:11.020088 2387 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:13:11.021031 kubelet[2387]: E1216 13:13:11.018942 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal.1881b4517dd52852 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,UID:ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,},FirstTimestamp:2025-12-16 13:13:11.002417234 +0000 UTC m=+1.065980765,LastTimestamp:2025-12-16 13:13:11.002417234 +0000 UTC m=+1.065980765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,}" Dec 16 13:13:11.026436 kubelet[2387]: I1216 13:13:11.026404 2387 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:13:11.026867 kubelet[2387]: I1216 13:13:11.026822 2387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:13:11.028863 kubelet[2387]: I1216 13:13:11.028781 2387 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:13:11.028957 kubelet[2387]: I1216 13:13:11.028868 2387 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:13:11.030377 kubelet[2387]: I1216 13:13:11.030267 2387 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:13:11.030482 kubelet[2387]: I1216 13:13:11.030423 2387 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:13:11.030835 kubelet[2387]: I1216 13:13:11.030790 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:13:11.033540 kubelet[2387]: I1216 13:13:11.033503 2387 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:13:11.033733 kubelet[2387]: E1216 13:13:11.033693 2387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" Dec 16 13:13:11.044313 kubelet[2387]: I1216 13:13:11.044262 2387 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:13:11.044427 kubelet[2387]: I1216 13:13:11.044329 2387 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:13:11.058482 kubelet[2387]: I1216 13:13:11.058412 2387 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:13:11.065614 kubelet[2387]: E1216 13:13:11.063428 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="200ms" Dec 16 13:13:11.065614 kubelet[2387]: E1216 13:13:11.063763 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:13:11.065614 kubelet[2387]: I1216 13:13:11.064279 2387 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:13:11.065614 kubelet[2387]: I1216 13:13:11.064322 2387 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:13:11.065614 kubelet[2387]: I1216 13:13:11.064354 2387 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:13:11.065614 kubelet[2387]: E1216 13:13:11.064437 2387 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:13:11.067862 kubelet[2387]: E1216 13:13:11.067824 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:13:11.071993 kubelet[2387]: E1216 13:13:11.071855 2387 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:13:11.080083 kubelet[2387]: I1216 13:13:11.080066 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:13:11.080369 kubelet[2387]: I1216 13:13:11.080356 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:13:11.080457 kubelet[2387]: I1216 13:13:11.080449 2387 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:13:11.082673 kubelet[2387]: I1216 13:13:11.082656 2387 policy_none.go:49] "None policy: Start" Dec 16 13:13:11.082775 kubelet[2387]: I1216 13:13:11.082765 2387 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:13:11.082849 kubelet[2387]: I1216 13:13:11.082840 2387 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:13:11.084528 kubelet[2387]: I1216 13:13:11.084513 2387 policy_none.go:47] "Start" Dec 16 13:13:11.091404 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:13:11.109267 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:13:11.114624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 13:13:11.122790 kubelet[2387]: E1216 13:13:11.122702 2387 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:13:11.123009 kubelet[2387]: I1216 13:13:11.122971 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:13:11.123098 kubelet[2387]: I1216 13:13:11.123001 2387 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:13:11.124138 kubelet[2387]: I1216 13:13:11.124106 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:13:11.126394 kubelet[2387]: E1216 13:13:11.126330 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:13:11.126602 kubelet[2387]: E1216 13:13:11.126495 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" Dec 16 13:13:11.184580 systemd[1]: Created slice kubepods-burstable-podf0cb4e861dd4db8cda8e9347874433db.slice - libcontainer container kubepods-burstable-podf0cb4e861dd4db8cda8e9347874433db.slice. Dec 16 13:13:11.197160 kubelet[2387]: E1216 13:13:11.196868 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.204943 systemd[1]: Created slice kubepods-burstable-pod21f66068a713018ca1bcec7614929b95.slice - libcontainer container kubepods-burstable-pod21f66068a713018ca1bcec7614929b95.slice. 
Dec 16 13:13:11.209504 kubelet[2387]: E1216 13:13:11.209460 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.213735 systemd[1]: Created slice kubepods-burstable-podf7be057c4e2988eca6ce98b7094853eb.slice - libcontainer container kubepods-burstable-podf7be057c4e2988eca6ce98b7094853eb.slice. Dec 16 13:13:11.217541 kubelet[2387]: E1216 13:13:11.217504 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.234791 kubelet[2387]: I1216 13:13:11.234756 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.235362 kubelet[2387]: E1216 13:13:11.235299 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.244735 kubelet[2387]: I1216 13:13:11.244637 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.244735 kubelet[2387]: I1216 13:13:11.244709 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.244953 kubelet[2387]: I1216 13:13:11.244741 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.244953 kubelet[2387]: I1216 13:13:11.244779 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.244953 kubelet[2387]: I1216 13:13:11.244845 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7be057c4e2988eca6ce98b7094853eb-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f7be057c4e2988eca6ce98b7094853eb\") " pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.244953 kubelet[2387]: I1216 13:13:11.244891 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0cb4e861dd4db8cda8e9347874433db-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f0cb4e861dd4db8cda8e9347874433db\") " pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.245145 kubelet[2387]: I1216 13:13:11.244935 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0cb4e861dd4db8cda8e9347874433db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f0cb4e861dd4db8cda8e9347874433db\") " pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.245145 kubelet[2387]: I1216 13:13:11.244969 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.245145 kubelet[2387]: I1216 13:13:11.245028 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0cb4e861dd4db8cda8e9347874433db-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f0cb4e861dd4db8cda8e9347874433db\") " pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.264078 kubelet[2387]: E1216 13:13:11.264028 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="400ms" Dec 16 13:13:11.440467 kubelet[2387]: I1216 13:13:11.440411 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.440885 kubelet[2387]: E1216 13:13:11.440842 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.502836 containerd[1496]: time="2025-12-16T13:13:11.502696272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,Uid:f0cb4e861dd4db8cda8e9347874433db,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:11.514881 containerd[1496]: time="2025-12-16T13:13:11.514832160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,Uid:21f66068a713018ca1bcec7614929b95,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:11.521359 containerd[1496]: time="2025-12-16T13:13:11.521288833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,Uid:f7be057c4e2988eca6ce98b7094853eb,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:11.665059 kubelet[2387]: E1216 13:13:11.664990 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="800ms" Dec 16 13:13:11.847477 kubelet[2387]: 
I1216 13:13:11.847178 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.848102 kubelet[2387]: E1216 13:13:11.847754 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" Dec 16 13:13:11.871393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433103739.mount: Deactivated successfully. Dec 16 13:13:11.877884 containerd[1496]: time="2025-12-16T13:13:11.877837198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:11.881426 containerd[1496]: time="2025-12-16T13:13:11.881370563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Dec 16 13:13:11.882305 containerd[1496]: time="2025-12-16T13:13:11.882245247Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:11.884645 containerd[1496]: time="2025-12-16T13:13:11.883898129Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:11.885507 containerd[1496]: time="2025-12-16T13:13:11.885440581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:11.886607 containerd[1496]: time="2025-12-16T13:13:11.886557188Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:13:11.887507 containerd[1496]: time="2025-12-16T13:13:11.887464551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:13:11.889036 containerd[1496]: time="2025-12-16T13:13:11.888973372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:13:11.891292 containerd[1496]: time="2025-12-16T13:13:11.890134233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 385.075297ms" Dec 16 13:13:11.893828 containerd[1496]: time="2025-12-16T13:13:11.893772429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 370.968364ms" Dec 16 13:13:11.901094 containerd[1496]: time="2025-12-16T13:13:11.901041631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 384.61589ms" Dec 16 13:13:11.935759 containerd[1496]: time="2025-12-16T13:13:11.935704963Z" level=info msg="connecting to shim 787d113bef6aac2e25f5682e693e23fcbb8fa5cadddec77bd606f611e7241d29" 
address="unix:///run/containerd/s/10e3b8d695e3f5318e282b3d1312c7b79e095846da5b4183b1e678e4c5802cb7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:11.959622 containerd[1496]: time="2025-12-16T13:13:11.959069714Z" level=info msg="connecting to shim 6dcc94e467e3c852c042f678b753aed32fe608f5b4fb55ddd23a1c88451086ca" address="unix:///run/containerd/s/c7aa4d0dc643737d86287a7b1c44751a681a680f1c192728a3ad1b631b4aafdd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:11.965271 containerd[1496]: time="2025-12-16T13:13:11.965217494Z" level=info msg="connecting to shim b94bc89ab9ff806931cefe31f31be1b3f5532e5a11ee13c41c30ef6bb79cd2a1" address="unix:///run/containerd/s/c2b3f947ca37919c9c1403574ddddae605d0b8f881c68d059e04fc840326c87c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:12.013006 systemd[1]: Started cri-containerd-6dcc94e467e3c852c042f678b753aed32fe608f5b4fb55ddd23a1c88451086ca.scope - libcontainer container 6dcc94e467e3c852c042f678b753aed32fe608f5b4fb55ddd23a1c88451086ca. Dec 16 13:13:12.025008 systemd[1]: Started cri-containerd-787d113bef6aac2e25f5682e693e23fcbb8fa5cadddec77bd606f611e7241d29.scope - libcontainer container 787d113bef6aac2e25f5682e693e23fcbb8fa5cadddec77bd606f611e7241d29. Dec 16 13:13:12.035605 systemd[1]: Started cri-containerd-b94bc89ab9ff806931cefe31f31be1b3f5532e5a11ee13c41c30ef6bb79cd2a1.scope - libcontainer container b94bc89ab9ff806931cefe31f31be1b3f5532e5a11ee13c41c30ef6bb79cd2a1. 
Dec 16 13:13:12.155289 containerd[1496]: time="2025-12-16T13:13:12.155034688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,Uid:f0cb4e861dd4db8cda8e9347874433db,Namespace:kube-system,Attempt:0,} returns sandbox id \"787d113bef6aac2e25f5682e693e23fcbb8fa5cadddec77bd606f611e7241d29\"" Dec 16 13:13:12.164390 kubelet[2387]: E1216 13:13:12.163794 2387 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-21291" Dec 16 13:13:12.192696 containerd[1496]: time="2025-12-16T13:13:12.192635516Z" level=info msg="CreateContainer within sandbox \"787d113bef6aac2e25f5682e693e23fcbb8fa5cadddec77bd606f611e7241d29\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:13:12.193226 containerd[1496]: time="2025-12-16T13:13:12.193122251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,Uid:21f66068a713018ca1bcec7614929b95,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dcc94e467e3c852c042f678b753aed32fe608f5b4fb55ddd23a1c88451086ca\"" Dec 16 13:13:12.196261 kubelet[2387]: E1216 13:13:12.196070 2387 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flat" Dec 16 13:13:12.201781 containerd[1496]: time="2025-12-16T13:13:12.201727696Z" level=info msg="CreateContainer within sandbox \"6dcc94e467e3c852c042f678b753aed32fe608f5b4fb55ddd23a1c88451086ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:13:12.209626 containerd[1496]: 
time="2025-12-16T13:13:12.209556287Z" level=info msg="Container 79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:12.215369 containerd[1496]: time="2025-12-16T13:13:12.215253584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal,Uid:f7be057c4e2988eca6ce98b7094853eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b94bc89ab9ff806931cefe31f31be1b3f5532e5a11ee13c41c30ef6bb79cd2a1\"" Dec 16 13:13:12.218103 kubelet[2387]: E1216 13:13:12.218022 2387 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-21291" Dec 16 13:13:12.221283 containerd[1496]: time="2025-12-16T13:13:12.221242319Z" level=info msg="Container fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:12.222679 containerd[1496]: time="2025-12-16T13:13:12.222644619Z" level=info msg="CreateContainer within sandbox \"b94bc89ab9ff806931cefe31f31be1b3f5532e5a11ee13c41c30ef6bb79cd2a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:13:12.226488 containerd[1496]: time="2025-12-16T13:13:12.226427255Z" level=info msg="CreateContainer within sandbox \"787d113bef6aac2e25f5682e693e23fcbb8fa5cadddec77bd606f611e7241d29\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116\"" Dec 16 13:13:12.227244 containerd[1496]: time="2025-12-16T13:13:12.227187360Z" level=info msg="StartContainer for \"79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116\"" Dec 16 13:13:12.229521 containerd[1496]: time="2025-12-16T13:13:12.229482269Z" level=info msg="connecting to shim 
79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116" address="unix:///run/containerd/s/10e3b8d695e3f5318e282b3d1312c7b79e095846da5b4183b1e678e4c5802cb7" protocol=ttrpc version=3 Dec 16 13:13:12.238066 containerd[1496]: time="2025-12-16T13:13:12.238030800Z" level=info msg="CreateContainer within sandbox \"6dcc94e467e3c852c042f678b753aed32fe608f5b4fb55ddd23a1c88451086ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0\"" Dec 16 13:13:12.238547 containerd[1496]: time="2025-12-16T13:13:12.238483807Z" level=info msg="Container af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:12.241019 containerd[1496]: time="2025-12-16T13:13:12.240949451Z" level=info msg="StartContainer for \"fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0\"" Dec 16 13:13:12.243226 containerd[1496]: time="2025-12-16T13:13:12.243165461Z" level=info msg="connecting to shim fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0" address="unix:///run/containerd/s/c7aa4d0dc643737d86287a7b1c44751a681a680f1c192728a3ad1b631b4aafdd" protocol=ttrpc version=3 Dec 16 13:13:12.254422 containerd[1496]: time="2025-12-16T13:13:12.254324593Z" level=info msg="CreateContainer within sandbox \"b94bc89ab9ff806931cefe31f31be1b3f5532e5a11ee13c41c30ef6bb79cd2a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5\"" Dec 16 13:13:12.256539 containerd[1496]: time="2025-12-16T13:13:12.256480167Z" level=info msg="StartContainer for \"af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5\"" Dec 16 13:13:12.261688 containerd[1496]: time="2025-12-16T13:13:12.261647277Z" level=info msg="connecting to shim af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5" 
address="unix:///run/containerd/s/c2b3f947ca37919c9c1403574ddddae605d0b8f881c68d059e04fc840326c87c" protocol=ttrpc version=3
Dec 16 13:13:12.279045 systemd[1]: Started cri-containerd-79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116.scope - libcontainer container 79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116.
Dec 16 13:13:12.292873 systemd[1]: Started cri-containerd-fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0.scope - libcontainer container fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0.
Dec 16 13:13:12.315809 systemd[1]: Started cri-containerd-af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5.scope - libcontainer container af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5.
Dec 16 13:13:12.413461 containerd[1496]: time="2025-12-16T13:13:12.412229256Z" level=info msg="StartContainer for \"79fbcd9f21d1fa422f5e80334e143021e6ef34f0d94679ae25bb006a5e20e116\" returns successfully"
Dec 16 13:13:12.439136 containerd[1496]: time="2025-12-16T13:13:12.439090427Z" level=info msg="StartContainer for \"fc56e2b78d6f6245f739ab42aeafca911464b7ede68e52b73242709b4a8f21c0\" returns successfully"
Dec 16 13:13:12.466175 kubelet[2387]: E1216 13:13:12.466111 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="1.6s"
Dec 16 13:13:12.515844 kubelet[2387]: E1216 13:13:12.515787 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:13:12.532641 containerd[1496]: time="2025-12-16T13:13:12.532556924Z" level=info msg="StartContainer for \"af9eb7dba3ec07cf618af02663671c7a1469cd159cee2bd93a853d1958b8fcb5\" returns successfully"
Dec 16 13:13:12.653249 kubelet[2387]: I1216 13:13:12.653206 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:13.091376 kubelet[2387]: E1216 13:13:13.091332 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:13.093608 kubelet[2387]: E1216 13:13:13.093480 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:13.100610 kubelet[2387]: E1216 13:13:13.099957 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:14.103612 kubelet[2387]: E1216 13:13:14.103492 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:14.104570 kubelet[2387]: E1216 13:13:14.104351 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:15.539899 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 13:13:16.100135 kubelet[2387]: I1216 13:13:16.100087 2387 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.100135 kubelet[2387]: E1216 13:13:16.100141 2387 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\": node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" not found"
Dec 16 13:13:16.135052 kubelet[2387]: I1216 13:13:16.135008 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.156060 kubelet[2387]: E1216 13:13:16.155799 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.156060 kubelet[2387]: I1216 13:13:16.155839 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.162811 kubelet[2387]: E1216 13:13:16.162758 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.162811 kubelet[2387]: I1216 13:13:16.162798 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.170079 kubelet[2387]: E1216 13:13:16.170041 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.378250 kubelet[2387]: I1216 13:13:16.376891 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.379641 kubelet[2387]: E1216 13:13:16.379578 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:16.986670 kubelet[2387]: I1216 13:13:16.986623 2387 apiserver.go:52] "Watching apiserver"
Dec 16 13:13:17.017541 kubelet[2387]: I1216 13:13:17.017497 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:17.024219 kubelet[2387]: I1216 13:13:17.024184 2387 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Dec 16 13:13:17.045489 kubelet[2387]: I1216 13:13:17.045448 2387 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 13:13:17.916694 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-7.scope)...
Dec 16 13:13:17.916716 systemd[1]: Reloading...
Dec 16 13:13:18.079621 zram_generator::config[2721]: No configuration found.
Dec 16 13:13:18.389448 systemd[1]: Reloading finished in 472 ms.
Dec 16 13:13:18.435149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:18.436687 kubelet[2387]: I1216 13:13:18.436603 2387 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:13:18.449871 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 13:13:18.450235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:18.450320 systemd[1]: kubelet.service: Consumed 1.626s CPU time, 124.9M memory peak.
Dec 16 13:13:18.453052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:18.811078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:18.826177 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:13:18.905964 kubelet[2766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:13:18.905964 kubelet[2766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:13:18.907164 kubelet[2766]: I1216 13:13:18.906046 2766 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:13:18.922327 kubelet[2766]: I1216 13:13:18.922267 2766 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 13:13:18.922327 kubelet[2766]: I1216 13:13:18.922301 2766 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:13:18.922327 kubelet[2766]: I1216 13:13:18.922335 2766 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 13:13:18.922656 kubelet[2766]: I1216 13:13:18.922345 2766 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:13:18.922729 kubelet[2766]: I1216 13:13:18.922706 2766 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 13:13:18.924254 kubelet[2766]: I1216 13:13:18.924212 2766 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 16 13:13:18.928341 kubelet[2766]: I1216 13:13:18.927782 2766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:13:18.934754 kubelet[2766]: I1216 13:13:18.934460 2766 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:13:18.939710 kubelet[2766]: I1216 13:13:18.939660 2766 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 13:13:18.940779 kubelet[2766]: I1216 13:13:18.940175 2766 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:13:18.940779 kubelet[2766]: I1216 13:13:18.940219 2766 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:13:18.940779 kubelet[2766]: I1216 13:13:18.940451 2766 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:13:18.940779 kubelet[2766]: I1216 13:13:18.940466 2766 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 13:13:18.941117 kubelet[2766]: I1216 13:13:18.940508 2766 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 13:13:18.942387 kubelet[2766]: I1216 13:13:18.942366 2766 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:13:18.942775 kubelet[2766]: I1216 13:13:18.942758 2766 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 13:13:18.944611 kubelet[2766]: I1216 13:13:18.943534 2766 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:13:18.944611 kubelet[2766]: I1216 13:13:18.943580 2766 kubelet.go:387] "Adding apiserver pod source"
Dec 16 13:13:18.944611 kubelet[2766]: I1216 13:13:18.943628 2766 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:13:18.947198 kubelet[2766]: I1216 13:13:18.947111 2766 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:13:18.949318 sudo[2780]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 16 13:13:18.949898 sudo[2780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 16 13:13:18.958736 kubelet[2766]: I1216 13:13:18.958708 2766 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 13:13:18.959186 kubelet[2766]: I1216 13:13:18.959162 2766 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 13:13:19.018855 kubelet[2766]: I1216 13:13:19.018822 2766 server.go:1262] "Started kubelet"
Dec 16 13:13:19.023074 kubelet[2766]: I1216 13:13:19.023048 2766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:13:19.040916 kubelet[2766]: I1216 13:13:19.039698 2766 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:13:19.043192 kubelet[2766]: I1216 13:13:19.043155 2766 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 13:13:19.055351 kubelet[2766]: I1216 13:13:19.055262 2766 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:13:19.063823 kubelet[2766]: I1216 13:13:19.063259 2766 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 13:13:19.070259 kubelet[2766]: I1216 13:13:19.069685 2766 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:13:19.070514 kubelet[2766]: I1216 13:13:19.060646 2766 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 13:13:19.070749 kubelet[2766]: I1216 13:13:19.059226 2766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:13:19.071866 kubelet[2766]: E1216 13:13:19.071832 2766 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:13:19.072892 kubelet[2766]: I1216 13:13:19.060622 2766 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 13:13:19.080476 kubelet[2766]: I1216 13:13:19.079873 2766 factory.go:223] Registration of the systemd container factory successfully
Dec 16 13:13:19.080476 kubelet[2766]: I1216 13:13:19.079993 2766 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:13:19.084764 kubelet[2766]: I1216 13:13:19.084741 2766 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 13:13:19.088547 kubelet[2766]: I1216 13:13:19.088490 2766 factory.go:223] Registration of the containerd container factory successfully
Dec 16 13:13:19.121166 kubelet[2766]: I1216 13:13:19.120785 2766 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:13:19.132937 kubelet[2766]: I1216 13:13:19.132571 2766 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:13:19.132937 kubelet[2766]: I1216 13:13:19.132943 2766 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 13:13:19.133148 kubelet[2766]: I1216 13:13:19.132981 2766 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 13:13:19.133148 kubelet[2766]: E1216 13:13:19.133045 2766 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:13:19.234864 kubelet[2766]: E1216 13:13:19.234782 2766 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259200 2766 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259228 2766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259254 2766 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259447 2766 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259461 2766 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259487 2766 policy_none.go:49] "None policy: Start"
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259503 2766 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 13:13:19.259582 kubelet[2766]: I1216 13:13:19.259517 2766 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 13:13:19.261628 kubelet[2766]: I1216 13:13:19.261229 2766 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Dec 16 13:13:19.261628 kubelet[2766]: I1216 13:13:19.261255 2766 policy_none.go:47] "Start"
Dec 16 13:13:19.276360 kubelet[2766]: E1216 13:13:19.276263 2766 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 13:13:19.279399 kubelet[2766]: I1216 13:13:19.276836 2766 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:13:19.279399 kubelet[2766]: I1216 13:13:19.276856 2766 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:13:19.279399 kubelet[2766]: I1216 13:13:19.277457 2766 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:13:19.285491 kubelet[2766]: E1216 13:13:19.285291 2766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:13:19.400621 kubelet[2766]: I1216 13:13:19.400039 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.426615 kubelet[2766]: I1216 13:13:19.425918 2766 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.426615 kubelet[2766]: I1216 13:13:19.426412 2766 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.439462 kubelet[2766]: I1216 13:13:19.437121 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.439462 kubelet[2766]: I1216 13:13:19.437275 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.440011 kubelet[2766]: I1216 13:13:19.439985 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.452962 kubelet[2766]: I1216 13:13:19.452923 2766 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Dec 16 13:13:19.462056 kubelet[2766]: I1216 13:13:19.461418 2766 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Dec 16 13:13:19.462056 kubelet[2766]: E1216 13:13:19.461525 2766 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.465060 kubelet[2766]: I1216 13:13:19.463302 2766 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Dec 16 13:13:19.493033 kubelet[2766]: I1216 13:13:19.492978 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0cb4e861dd4db8cda8e9347874433db-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f0cb4e861dd4db8cda8e9347874433db\") " pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493033 kubelet[2766]: I1216 13:13:19.493042 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0cb4e861dd4db8cda8e9347874433db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f0cb4e861dd4db8cda8e9347874433db\") " pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493269 kubelet[2766]: I1216 13:13:19.493072 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493269 kubelet[2766]: I1216 13:13:19.493097 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0cb4e861dd4db8cda8e9347874433db-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f0cb4e861dd4db8cda8e9347874433db\") " pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493269 kubelet[2766]: I1216 13:13:19.493124 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493269 kubelet[2766]: I1216 13:13:19.493150 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493480 kubelet[2766]: I1216 13:13:19.493175 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493480 kubelet[2766]: I1216 13:13:19.493201 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21f66068a713018ca1bcec7614929b95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"21f66068a713018ca1bcec7614929b95\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.493480 kubelet[2766]: I1216 13:13:19.493228 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7be057c4e2988eca6ce98b7094853eb-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" (UID: \"f7be057c4e2988eca6ce98b7094853eb\") " pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:19.676348 sudo[2780]: pam_unix(sudo:session): session closed for user root
Dec 16 13:13:19.946273 kubelet[2766]: I1216 13:13:19.945908 2766 apiserver.go:52] "Watching apiserver"
Dec 16 13:13:19.971304 kubelet[2766]: I1216 13:13:19.971237 2766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 13:13:20.193066 kubelet[2766]: I1216 13:13:20.192998 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:20.214128 kubelet[2766]: I1216 13:13:20.213983 2766 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Dec 16 13:13:20.214768 kubelet[2766]: E1216 13:13:20.214701 2766 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal"
Dec 16 13:13:20.301250 kubelet[2766]: I1216 13:13:20.301133 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" podStartSLOduration=1.3011099 podStartE2EDuration="1.3011099s" podCreationTimestamp="2025-12-16 13:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:20.281014582 +0000 UTC m=+1.448612921" watchObservedRunningTime="2025-12-16 13:13:20.3011099 +0000 UTC m=+1.468708237"
Dec 16 13:13:20.323907 kubelet[2766]: I1216 13:13:20.323770 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" podStartSLOduration=1.323746211 podStartE2EDuration="1.323746211s" podCreationTimestamp="2025-12-16 13:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:20.30386828 +0000 UTC m=+1.471466617" watchObservedRunningTime="2025-12-16 13:13:20.323746211 +0000 UTC m=+1.491344549"
Dec 16 13:13:21.804864 sudo[1833]: pam_unix(sudo:session): session closed for user root
Dec 16 13:13:21.850550 sshd[1832]: Connection closed by 139.178.68.195 port 41658
Dec 16 13:13:21.851081 sshd-session[1829]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:21.858109 systemd[1]: sshd@6-10.128.0.4:22-139.178.68.195:41658.service: Deactivated successfully.
Dec 16 13:13:21.861818 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 13:13:21.862260 systemd[1]: session-7.scope: Consumed 6.881s CPU time, 274.5M memory peak.
Dec 16 13:13:21.865348 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit.
Dec 16 13:13:21.868164 systemd-logind[1471]: Removed session 7.
Dec 16 13:13:24.187079 kubelet[2766]: I1216 13:13:24.187020 2766 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 13:13:24.187934 containerd[1496]: time="2025-12-16T13:13:24.187573848Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 13:13:24.188479 kubelet[2766]: I1216 13:13:24.188023 2766 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 13:13:24.790606 kubelet[2766]: I1216 13:13:24.790511 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal" podStartSLOduration=7.790486379 podStartE2EDuration="7.790486379s" podCreationTimestamp="2025-12-16 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:20.325769699 +0000 UTC m=+1.493368036" watchObservedRunningTime="2025-12-16 13:13:24.790486379 +0000 UTC m=+5.958084712"
Dec 16 13:13:24.812749 systemd[1]: Created slice kubepods-besteffort-pod5c64cacc_f97c_441c_93f8_a362ef1c37e9.slice - libcontainer container kubepods-besteffort-pod5c64cacc_f97c_441c_93f8_a362ef1c37e9.slice.
Dec 16 13:13:24.830763 kubelet[2766]: I1216 13:13:24.830194 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq5cx\" (UniqueName: \"kubernetes.io/projected/5c64cacc-f97c-441c-93f8-a362ef1c37e9-kube-api-access-pq5cx\") pod \"kube-proxy-5bglc\" (UID: \"5c64cacc-f97c-441c-93f8-a362ef1c37e9\") " pod="kube-system/kube-proxy-5bglc"
Dec 16 13:13:24.830763 kubelet[2766]: I1216 13:13:24.830246 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c64cacc-f97c-441c-93f8-a362ef1c37e9-xtables-lock\") pod \"kube-proxy-5bglc\" (UID: \"5c64cacc-f97c-441c-93f8-a362ef1c37e9\") " pod="kube-system/kube-proxy-5bglc"
Dec 16 13:13:24.830763 kubelet[2766]: I1216 13:13:24.830276 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c64cacc-f97c-441c-93f8-a362ef1c37e9-lib-modules\") pod \"kube-proxy-5bglc\" (UID: \"5c64cacc-f97c-441c-93f8-a362ef1c37e9\") " pod="kube-system/kube-proxy-5bglc"
Dec 16 13:13:24.830763 kubelet[2766]: I1216 13:13:24.830312 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c64cacc-f97c-441c-93f8-a362ef1c37e9-kube-proxy\") pod \"kube-proxy-5bglc\" (UID: \"5c64cacc-f97c-441c-93f8-a362ef1c37e9\") " pod="kube-system/kube-proxy-5bglc"
Dec 16 13:13:24.842887 systemd[1]: Created slice kubepods-burstable-poddef05b8a_2050_4643_9742_8d47acffc818.slice - libcontainer container kubepods-burstable-poddef05b8a_2050_4643_9742_8d47acffc818.slice.
Dec 16 13:13:24.930644 kubelet[2766]: I1216 13:13:24.930504 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cni-path\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.930917 kubelet[2766]: I1216 13:13:24.930863 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-net\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.930917 kubelet[2766]: I1216 13:13:24.930889 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-hubble-tls\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931171 kubelet[2766]: I1216 13:13:24.931125 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-hostproc\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931693 kubelet[2766]: I1216 13:13:24.931156 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-run\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931801 kubelet[2766]: I1216 13:13:24.931720 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-cgroup\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931801 kubelet[2766]: I1216 13:13:24.931773 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-lib-modules\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931910 kubelet[2766]: I1216 13:13:24.931810 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/def05b8a-2050-4643-9742-8d47acffc818-clustermesh-secrets\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931910 kubelet[2766]: I1216 13:13:24.931838 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmbj\" (UniqueName: \"kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-kube-api-access-pcmbj\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.931910 kubelet[2766]: I1216 13:13:24.931883 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-bpf-maps\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.932080 kubelet[2766]: I1216 13:13:24.931908 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-etc-cni-netd\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.932080 kubelet[2766]: I1216 13:13:24.931945 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-xtables-lock\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.932080 kubelet[2766]: I1216 13:13:24.931971 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/def05b8a-2050-4643-9742-8d47acffc818-cilium-config-path\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.932080 kubelet[2766]: I1216 13:13:24.931996 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-kernel\") pod \"cilium-z7qf7\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " pod="kube-system/cilium-z7qf7"
Dec 16 13:13:24.943042 kubelet[2766]: E1216 13:13:24.942884 2766 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 16 13:13:24.943042 kubelet[2766]: E1216 13:13:24.942962 2766 projected.go:196] Error preparing data for projected volume kube-api-access-pq5cx for pod kube-system/kube-proxy-5bglc: configmap "kube-root-ca.crt" not found
Dec 16 13:13:24.943543 kubelet[2766]: E1216 13:13:24.943296 2766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c64cacc-f97c-441c-93f8-a362ef1c37e9-kube-api-access-pq5cx podName:5c64cacc-f97c-441c-93f8-a362ef1c37e9 nodeName:}" failed. No retries permitted until 2025-12-16 13:13:25.443260279 +0000 UTC m=+6.610858613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pq5cx" (UniqueName: "kubernetes.io/projected/5c64cacc-f97c-441c-93f8-a362ef1c37e9-kube-api-access-pq5cx") pod "kube-proxy-5bglc" (UID: "5c64cacc-f97c-441c-93f8-a362ef1c37e9") : configmap "kube-root-ca.crt" not found
Dec 16 13:13:25.053638 kubelet[2766]: E1216 13:13:25.053473 2766 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 16 13:13:25.053638 kubelet[2766]: E1216 13:13:25.053537 2766 projected.go:196] Error preparing data for projected volume kube-api-access-pcmbj for pod kube-system/cilium-z7qf7: configmap "kube-root-ca.crt" not found
Dec 16 13:13:25.055623 kubelet[2766]: E1216 13:13:25.053979 2766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-kube-api-access-pcmbj podName:def05b8a-2050-4643-9742-8d47acffc818 nodeName:}" failed. No retries permitted until 2025-12-16 13:13:25.553661985 +0000 UTC m=+6.721260311 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-pcmbj" (UniqueName: "kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-kube-api-access-pcmbj") pod "cilium-z7qf7" (UID: "def05b8a-2050-4643-9742-8d47acffc818") : configmap "kube-root-ca.crt" not found Dec 16 13:13:25.341248 systemd[1]: Created slice kubepods-besteffort-pod8636be1b_1d1b_4cf3_870b_fc7fff7b5169.slice - libcontainer container kubepods-besteffort-pod8636be1b_1d1b_4cf3_870b_fc7fff7b5169.slice. Dec 16 13:13:25.435565 kubelet[2766]: I1216 13:13:25.435512 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-lpm77\" (UID: \"8636be1b-1d1b-4cf3-870b-fc7fff7b5169\") " pod="kube-system/cilium-operator-6f9c7c5859-lpm77" Dec 16 13:13:25.435565 kubelet[2766]: I1216 13:13:25.435620 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6sgs\" (UniqueName: \"kubernetes.io/projected/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-kube-api-access-w6sgs\") pod \"cilium-operator-6f9c7c5859-lpm77\" (UID: \"8636be1b-1d1b-4cf3-870b-fc7fff7b5169\") " pod="kube-system/cilium-operator-6f9c7c5859-lpm77" Dec 16 13:13:25.651756 containerd[1496]: time="2025-12-16T13:13:25.651628371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-lpm77,Uid:8636be1b-1d1b-4cf3-870b-fc7fff7b5169,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:25.673624 containerd[1496]: time="2025-12-16T13:13:25.673105524Z" level=info msg="connecting to shim 3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0" address="unix:///run/containerd/s/3bc382812a756e12ce5423e9100039475fc253d2cb997e268d0ac82732bef1ec" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:25.703894 systemd[1]: Started 
cri-containerd-3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0.scope - libcontainer container 3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0. Dec 16 13:13:25.740970 containerd[1496]: time="2025-12-16T13:13:25.740921533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5bglc,Uid:5c64cacc-f97c-441c-93f8-a362ef1c37e9,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:25.755778 containerd[1496]: time="2025-12-16T13:13:25.755728879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7qf7,Uid:def05b8a-2050-4643-9742-8d47acffc818,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:25.789144 containerd[1496]: time="2025-12-16T13:13:25.789078760Z" level=info msg="connecting to shim d7e1a1660fd0777115298bf619ae3786552ea28e0bc66624a3a534329c0fa5e0" address="unix:///run/containerd/s/7dc763fc67d37f0f0305469145f7c83709a4aafbc0f5916073c6dca269f423f5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:25.799227 containerd[1496]: time="2025-12-16T13:13:25.799169640Z" level=info msg="connecting to shim c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21" address="unix:///run/containerd/s/3e762884b646a6edd9b34406dc85a4122452f056b1b7aac672d5f183e76aa023" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:25.812964 containerd[1496]: time="2025-12-16T13:13:25.812849069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-lpm77,Uid:8636be1b-1d1b-4cf3-870b-fc7fff7b5169,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\"" Dec 16 13:13:25.821348 containerd[1496]: time="2025-12-16T13:13:25.821276009Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:13:25.856956 systemd[1]: Started cri-containerd-d7e1a1660fd0777115298bf619ae3786552ea28e0bc66624a3a534329c0fa5e0.scope - libcontainer container 
d7e1a1660fd0777115298bf619ae3786552ea28e0bc66624a3a534329c0fa5e0. Dec 16 13:13:25.871065 systemd[1]: Started cri-containerd-c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21.scope - libcontainer container c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21. Dec 16 13:13:25.928313 containerd[1496]: time="2025-12-16T13:13:25.928099797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5bglc,Uid:5c64cacc-f97c-441c-93f8-a362ef1c37e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7e1a1660fd0777115298bf619ae3786552ea28e0bc66624a3a534329c0fa5e0\"" Dec 16 13:13:25.930390 containerd[1496]: time="2025-12-16T13:13:25.930099367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7qf7,Uid:def05b8a-2050-4643-9742-8d47acffc818,Namespace:kube-system,Attempt:0,} returns sandbox id \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\"" Dec 16 13:13:25.939773 containerd[1496]: time="2025-12-16T13:13:25.939732999Z" level=info msg="CreateContainer within sandbox \"d7e1a1660fd0777115298bf619ae3786552ea28e0bc66624a3a534329c0fa5e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:13:25.952035 containerd[1496]: time="2025-12-16T13:13:25.951975531Z" level=info msg="Container 0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:25.960525 containerd[1496]: time="2025-12-16T13:13:25.960470020Z" level=info msg="CreateContainer within sandbox \"d7e1a1660fd0777115298bf619ae3786552ea28e0bc66624a3a534329c0fa5e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f\"" Dec 16 13:13:25.961556 containerd[1496]: time="2025-12-16T13:13:25.961520110Z" level=info msg="StartContainer for \"0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f\"" Dec 16 13:13:25.964191 containerd[1496]: time="2025-12-16T13:13:25.964073826Z" 
level=info msg="connecting to shim 0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f" address="unix:///run/containerd/s/7dc763fc67d37f0f0305469145f7c83709a4aafbc0f5916073c6dca269f423f5" protocol=ttrpc version=3 Dec 16 13:13:25.988844 systemd[1]: Started cri-containerd-0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f.scope - libcontainer container 0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f. Dec 16 13:13:26.104478 containerd[1496]: time="2025-12-16T13:13:26.104424847Z" level=info msg="StartContainer for \"0dc392e37d989acadb38190a52f9bf74dd6696edc252b4136f5fe059ec665b0f\" returns successfully" Dec 16 13:13:26.235212 kubelet[2766]: I1216 13:13:26.235046 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5bglc" podStartSLOduration=2.235024076 podStartE2EDuration="2.235024076s" podCreationTimestamp="2025-12-16 13:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:26.234630106 +0000 UTC m=+7.402228465" watchObservedRunningTime="2025-12-16 13:13:26.235024076 +0000 UTC m=+7.402622415" Dec 16 13:13:28.094752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510212868.mount: Deactivated successfully. 
Dec 16 13:13:28.991197 containerd[1496]: time="2025-12-16T13:13:28.991129741Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:28.992556 containerd[1496]: time="2025-12-16T13:13:28.992272811Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:13:28.993842 containerd[1496]: time="2025-12-16T13:13:28.993805130Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:28.995464 containerd[1496]: time="2025-12-16T13:13:28.995426907Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.174097031s" Dec 16 13:13:28.995656 containerd[1496]: time="2025-12-16T13:13:28.995627409Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:13:28.997704 containerd[1496]: time="2025-12-16T13:13:28.997395590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:13:29.001751 containerd[1496]: time="2025-12-16T13:13:29.001711482Z" level=info msg="CreateContainer within sandbox 
\"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:13:29.012728 containerd[1496]: time="2025-12-16T13:13:29.012692813Z" level=info msg="Container 51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:29.025180 containerd[1496]: time="2025-12-16T13:13:29.025126202Z" level=info msg="CreateContainer within sandbox \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\"" Dec 16 13:13:29.025891 containerd[1496]: time="2025-12-16T13:13:29.025829791Z" level=info msg="StartContainer for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\"" Dec 16 13:13:29.027685 containerd[1496]: time="2025-12-16T13:13:29.027647553Z" level=info msg="connecting to shim 51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b" address="unix:///run/containerd/s/3bc382812a756e12ce5423e9100039475fc253d2cb997e268d0ac82732bef1ec" protocol=ttrpc version=3 Dec 16 13:13:29.068132 systemd[1]: Started cri-containerd-51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b.scope - libcontainer container 51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b. 
Dec 16 13:13:29.114178 containerd[1496]: time="2025-12-16T13:13:29.114090624Z" level=info msg="StartContainer for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" returns successfully" Dec 16 13:13:29.256699 kubelet[2766]: I1216 13:13:29.255957 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-lpm77" podStartSLOduration=1.076415619 podStartE2EDuration="4.255933782s" podCreationTimestamp="2025-12-16 13:13:25 +0000 UTC" firstStartedPulling="2025-12-16 13:13:25.81709055 +0000 UTC m=+6.984688871" lastFinishedPulling="2025-12-16 13:13:28.996608696 +0000 UTC m=+10.164207034" observedRunningTime="2025-12-16 13:13:29.255142404 +0000 UTC m=+10.422740730" watchObservedRunningTime="2025-12-16 13:13:29.255933782 +0000 UTC m=+10.423532124" Dec 16 13:13:29.534805 update_engine[1479]: I20251216 13:13:29.534634 1479 update_attempter.cc:509] Updating boot flags... Dec 16 13:13:35.304248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667489750.mount: Deactivated successfully. 
Dec 16 13:13:38.186073 containerd[1496]: time="2025-12-16T13:13:38.186000348Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:38.187352 containerd[1496]: time="2025-12-16T13:13:38.187304473Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:13:38.190769 containerd[1496]: time="2025-12-16T13:13:38.190277528Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:13:38.192345 containerd[1496]: time="2025-12-16T13:13:38.192303482Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.194868252s" Dec 16 13:13:38.192517 containerd[1496]: time="2025-12-16T13:13:38.192489616Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:13:38.198864 containerd[1496]: time="2025-12-16T13:13:38.198821256Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:13:38.217642 containerd[1496]: time="2025-12-16T13:13:38.212567166Z" level=info msg="Container 2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7: CDI 
devices from CRI Config.CDIDevices: []" Dec 16 13:13:38.224474 containerd[1496]: time="2025-12-16T13:13:38.224415870Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\"" Dec 16 13:13:38.225616 containerd[1496]: time="2025-12-16T13:13:38.225327724Z" level=info msg="StartContainer for \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\"" Dec 16 13:13:38.226863 containerd[1496]: time="2025-12-16T13:13:38.226795739Z" level=info msg="connecting to shim 2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7" address="unix:///run/containerd/s/3e762884b646a6edd9b34406dc85a4122452f056b1b7aac672d5f183e76aa023" protocol=ttrpc version=3 Dec 16 13:13:38.266849 systemd[1]: Started cri-containerd-2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7.scope - libcontainer container 2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7. Dec 16 13:13:38.313131 containerd[1496]: time="2025-12-16T13:13:38.312977200Z" level=info msg="StartContainer for \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\" returns successfully" Dec 16 13:13:38.330041 systemd[1]: cri-containerd-2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7.scope: Deactivated successfully. Dec 16 13:13:38.335638 containerd[1496]: time="2025-12-16T13:13:38.335566368Z" level=info msg="received container exit event container_id:\"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\" id:\"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\" pid:3268 exited_at:{seconds:1765890818 nanos:334770360}" Dec 16 13:13:38.370178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7-rootfs.mount: Deactivated successfully. 
Dec 16 13:13:41.318806 containerd[1496]: time="2025-12-16T13:13:41.318746199Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:13:41.331480 containerd[1496]: time="2025-12-16T13:13:41.331431551Z" level=info msg="Container e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:41.344848 containerd[1496]: time="2025-12-16T13:13:41.344796642Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\"" Dec 16 13:13:41.345521 containerd[1496]: time="2025-12-16T13:13:41.345485451Z" level=info msg="StartContainer for \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\"" Dec 16 13:13:41.347626 containerd[1496]: time="2025-12-16T13:13:41.346945119Z" level=info msg="connecting to shim e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd" address="unix:///run/containerd/s/3e762884b646a6edd9b34406dc85a4122452f056b1b7aac672d5f183e76aa023" protocol=ttrpc version=3 Dec 16 13:13:41.390873 systemd[1]: Started cri-containerd-e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd.scope - libcontainer container e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd. Dec 16 13:13:41.438100 containerd[1496]: time="2025-12-16T13:13:41.438051653Z" level=info msg="StartContainer for \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\" returns successfully" Dec 16 13:13:41.456338 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:13:41.456788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 13:13:41.458857 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:13:41.462538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:13:41.466752 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:13:41.467469 systemd[1]: cri-containerd-e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd.scope: Deactivated successfully. Dec 16 13:13:41.474266 containerd[1496]: time="2025-12-16T13:13:41.473520994Z" level=info msg="received container exit event container_id:\"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\" id:\"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\" pid:3311 exited_at:{seconds:1765890821 nanos:467994880}" Dec 16 13:13:41.509396 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:13:42.328074 containerd[1496]: time="2025-12-16T13:13:42.327999430Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:13:42.337691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd-rootfs.mount: Deactivated successfully. Dec 16 13:13:42.357617 containerd[1496]: time="2025-12-16T13:13:42.356401809Z" level=info msg="Container 674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:42.370385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067588584.mount: Deactivated successfully. 
Dec 16 13:13:42.380958 containerd[1496]: time="2025-12-16T13:13:42.380897729Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\"" Dec 16 13:13:42.382335 containerd[1496]: time="2025-12-16T13:13:42.382305063Z" level=info msg="StartContainer for \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\"" Dec 16 13:13:42.386848 containerd[1496]: time="2025-12-16T13:13:42.385574436Z" level=info msg="connecting to shim 674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6" address="unix:///run/containerd/s/3e762884b646a6edd9b34406dc85a4122452f056b1b7aac672d5f183e76aa023" protocol=ttrpc version=3 Dec 16 13:13:42.425854 systemd[1]: Started cri-containerd-674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6.scope - libcontainer container 674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6. Dec 16 13:13:42.521242 containerd[1496]: time="2025-12-16T13:13:42.521186078Z" level=info msg="StartContainer for \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\" returns successfully" Dec 16 13:13:42.523911 systemd[1]: cri-containerd-674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6.scope: Deactivated successfully. Dec 16 13:13:42.525468 containerd[1496]: time="2025-12-16T13:13:42.525425429Z" level=info msg="received container exit event container_id:\"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\" id:\"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\" pid:3359 exited_at:{seconds:1765890822 nanos:523997560}" Dec 16 13:13:42.561568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6-rootfs.mount: Deactivated successfully. 
Dec 16 13:13:43.334628 containerd[1496]: time="2025-12-16T13:13:43.334551288Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:13:43.361452 containerd[1496]: time="2025-12-16T13:13:43.361353123Z" level=info msg="Container 773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:43.366990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60624130.mount: Deactivated successfully. Dec 16 13:13:43.377176 containerd[1496]: time="2025-12-16T13:13:43.375769202Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\"" Dec 16 13:13:43.377576 containerd[1496]: time="2025-12-16T13:13:43.377497250Z" level=info msg="StartContainer for \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\"" Dec 16 13:13:43.379239 containerd[1496]: time="2025-12-16T13:13:43.379184135Z" level=info msg="connecting to shim 773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc" address="unix:///run/containerd/s/3e762884b646a6edd9b34406dc85a4122452f056b1b7aac672d5f183e76aa023" protocol=ttrpc version=3 Dec 16 13:13:43.411852 systemd[1]: Started cri-containerd-773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc.scope - libcontainer container 773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc. Dec 16 13:13:43.450448 systemd[1]: cri-containerd-773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc.scope: Deactivated successfully. 
Dec 16 13:13:43.457188 containerd[1496]: time="2025-12-16T13:13:43.457141609Z" level=info msg="received container exit event container_id:\"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\" id:\"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\" pid:3398 exited_at:{seconds:1765890823 nanos:456545187}" Dec 16 13:13:43.459415 containerd[1496]: time="2025-12-16T13:13:43.459276155Z" level=info msg="StartContainer for \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\" returns successfully" Dec 16 13:13:43.489341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc-rootfs.mount: Deactivated successfully. Dec 16 13:13:44.341565 containerd[1496]: time="2025-12-16T13:13:44.341509190Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:13:44.367582 containerd[1496]: time="2025-12-16T13:13:44.367517708Z" level=info msg="Container 4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:44.383224 containerd[1496]: time="2025-12-16T13:13:44.383056202Z" level=info msg="CreateContainer within sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\"" Dec 16 13:13:44.384679 containerd[1496]: time="2025-12-16T13:13:44.384463154Z" level=info msg="StartContainer for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\"" Dec 16 13:13:44.387520 containerd[1496]: time="2025-12-16T13:13:44.387369558Z" level=info msg="connecting to shim 4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420" 
address="unix:///run/containerd/s/3e762884b646a6edd9b34406dc85a4122452f056b1b7aac672d5f183e76aa023" protocol=ttrpc version=3 Dec 16 13:13:44.416900 systemd[1]: Started cri-containerd-4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420.scope - libcontainer container 4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420. Dec 16 13:13:44.492298 containerd[1496]: time="2025-12-16T13:13:44.492227410Z" level=info msg="StartContainer for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" returns successfully" Dec 16 13:13:44.680230 kubelet[2766]: I1216 13:13:44.678661 2766 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:13:44.740204 systemd[1]: Created slice kubepods-burstable-podb23e1b0d_22cc_46e2_a58b_934d61591910.slice - libcontainer container kubepods-burstable-podb23e1b0d_22cc_46e2_a58b_934d61591910.slice. Dec 16 13:13:44.755649 systemd[1]: Created slice kubepods-burstable-podcc685394_bd34_4c8f_b540_ae5f6f3d078b.slice - libcontainer container kubepods-burstable-podcc685394_bd34_4c8f_b540_ae5f6f3d078b.slice. 
Dec 16 13:13:44.783298 kubelet[2766]: I1216 13:13:44.783216 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7lb\" (UniqueName: \"kubernetes.io/projected/b23e1b0d-22cc-46e2-a58b-934d61591910-kube-api-access-bl7lb\") pod \"coredns-66bc5c9577-dzk2v\" (UID: \"b23e1b0d-22cc-46e2-a58b-934d61591910\") " pod="kube-system/coredns-66bc5c9577-dzk2v" Dec 16 13:13:44.783684 kubelet[2766]: I1216 13:13:44.783412 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc685394-bd34-4c8f-b540-ae5f6f3d078b-config-volume\") pod \"coredns-66bc5c9577-6kcd2\" (UID: \"cc685394-bd34-4c8f-b540-ae5f6f3d078b\") " pod="kube-system/coredns-66bc5c9577-6kcd2" Dec 16 13:13:44.783684 kubelet[2766]: I1216 13:13:44.783675 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b23e1b0d-22cc-46e2-a58b-934d61591910-config-volume\") pod \"coredns-66bc5c9577-dzk2v\" (UID: \"b23e1b0d-22cc-46e2-a58b-934d61591910\") " pod="kube-system/coredns-66bc5c9577-dzk2v" Dec 16 13:13:44.783859 kubelet[2766]: I1216 13:13:44.783718 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzndr\" (UniqueName: \"kubernetes.io/projected/cc685394-bd34-4c8f-b540-ae5f6f3d078b-kube-api-access-zzndr\") pod \"coredns-66bc5c9577-6kcd2\" (UID: \"cc685394-bd34-4c8f-b540-ae5f6f3d078b\") " pod="kube-system/coredns-66bc5c9577-6kcd2" Dec 16 13:13:45.052300 containerd[1496]: time="2025-12-16T13:13:45.052238701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dzk2v,Uid:b23e1b0d-22cc-46e2-a58b-934d61591910,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:45.066316 containerd[1496]: time="2025-12-16T13:13:45.066188601Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-6kcd2,Uid:cc685394-bd34-4c8f-b540-ae5f6f3d078b,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:45.374896 kubelet[2766]: I1216 13:13:45.373913 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z7qf7" podStartSLOduration=9.112472308 podStartE2EDuration="21.373891215s" podCreationTimestamp="2025-12-16 13:13:24 +0000 UTC" firstStartedPulling="2025-12-16 13:13:25.932264831 +0000 UTC m=+7.099863145" lastFinishedPulling="2025-12-16 13:13:38.193683732 +0000 UTC m=+19.361282052" observedRunningTime="2025-12-16 13:13:45.37315982 +0000 UTC m=+26.540758163" watchObservedRunningTime="2025-12-16 13:13:45.373891215 +0000 UTC m=+26.541489553" Dec 16 13:13:46.925718 systemd-networkd[1419]: cilium_host: Link UP Dec 16 13:13:46.927435 systemd-networkd[1419]: cilium_net: Link UP Dec 16 13:13:46.928174 systemd-networkd[1419]: cilium_net: Gained carrier Dec 16 13:13:46.928462 systemd-networkd[1419]: cilium_host: Gained carrier Dec 16 13:13:47.074689 systemd-networkd[1419]: cilium_vxlan: Link UP Dec 16 13:13:47.074703 systemd-networkd[1419]: cilium_vxlan: Gained carrier Dec 16 13:13:47.158930 systemd-networkd[1419]: cilium_net: Gained IPv6LL Dec 16 13:13:47.373616 kernel: NET: Registered PF_ALG protocol family Dec 16 13:13:47.926972 systemd-networkd[1419]: cilium_host: Gained IPv6LL Dec 16 13:13:48.288848 systemd-networkd[1419]: lxc_health: Link UP Dec 16 13:13:48.298125 systemd-networkd[1419]: lxc_health: Gained carrier Dec 16 13:13:48.620631 kernel: eth0: renamed from tmp7fcaa Dec 16 13:13:48.623734 systemd-networkd[1419]: lxccb734e9a6fa1: Link UP Dec 16 13:13:48.624203 systemd-networkd[1419]: lxccb734e9a6fa1: Gained carrier Dec 16 13:13:48.632331 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL Dec 16 13:13:48.645746 kernel: eth0: renamed from tmpf03c0 Dec 16 13:13:48.640041 systemd-networkd[1419]: lxc211082a9c82d: Link UP Dec 16 13:13:48.652977 systemd-networkd[1419]: lxc211082a9c82d: Gained 
carrier Dec 16 13:13:49.655216 systemd-networkd[1419]: lxccb734e9a6fa1: Gained IPv6LL Dec 16 13:13:49.782754 systemd-networkd[1419]: lxc_health: Gained IPv6LL Dec 16 13:13:50.551648 systemd-networkd[1419]: lxc211082a9c82d: Gained IPv6LL Dec 16 13:13:52.881476 ntpd[1664]: Listen normally on 6 cilium_host 192.168.0.98:123 Dec 16 13:13:52.881580 ntpd[1664]: Listen normally on 7 cilium_net [fe80::f4f2:44ff:fee2:25b7%4]:123 Dec 16 13:13:52.882417 ntpd[1664]: Listen normally on 8 cilium_host [fe80::c4c9:2ff:feca:5ba1%5]:123 Dec 16 13:13:52.882462 ntpd[1664]: Listen normally on 9 cilium_vxlan [fe80::20eb:7ff:fed1:5732%6]:123 Dec 16 13:13:52.882503 ntpd[1664]: Listen normally on 10 lxc_health [fe80::2031:4dff:fed0:ddf1%8]:123 Dec 16 13:13:52.882546 ntpd[1664]: Listen normally on 11 lxccb734e9a6fa1 [fe80::187a:21ff:fe93:9a53%10]:123 Dec 16 13:13:52.882602 ntpd[1664]: Listen normally on 12 lxc211082a9c82d [fe80::84d9:73ff:fe19:b49a%12]:123 Dec 16 13:13:53.630641 containerd[1496]: time="2025-12-16T13:13:53.629040122Z" level=info msg="connecting to shim 
7fcaa422ce236a021fd7602f3cf4eb88f37de9fd0b7e92d54d733b7ef954ad68" address="unix:///run/containerd/s/845abb7ca40dc0d3777e5c48c8536581975256f1a6d38579a6956597b33beb9c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:53.637155 containerd[1496]: time="2025-12-16T13:13:53.636308368Z" level=info msg="connecting to shim f03c0ebe59ba4753f6c170c4f58b38ce0ed3d1aca5047069f2c9072987946888" address="unix:///run/containerd/s/03d8625d9f8c56bc936d4bd3363cfe2e2f32a0ca2701faebd412303b9213159a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:53.715921 systemd[1]: Started cri-containerd-f03c0ebe59ba4753f6c170c4f58b38ce0ed3d1aca5047069f2c9072987946888.scope - libcontainer container f03c0ebe59ba4753f6c170c4f58b38ce0ed3d1aca5047069f2c9072987946888. Dec 16 13:13:53.725634 systemd[1]: Started cri-containerd-7fcaa422ce236a021fd7602f3cf4eb88f37de9fd0b7e92d54d733b7ef954ad68.scope - libcontainer container 7fcaa422ce236a021fd7602f3cf4eb88f37de9fd0b7e92d54d733b7ef954ad68. Dec 16 13:13:53.856814 containerd[1496]: time="2025-12-16T13:13:53.856731217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6kcd2,Uid:cc685394-bd34-4c8f-b540-ae5f6f3d078b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f03c0ebe59ba4753f6c170c4f58b38ce0ed3d1aca5047069f2c9072987946888\"" Dec 16 13:13:53.866934 containerd[1496]: time="2025-12-16T13:13:53.866852945Z" level=info msg="CreateContainer within sandbox \"f03c0ebe59ba4753f6c170c4f58b38ce0ed3d1aca5047069f2c9072987946888\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:13:53.887886 containerd[1496]: time="2025-12-16T13:13:53.887790203Z" level=info msg="Container f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:53.899291 containerd[1496]: time="2025-12-16T13:13:53.899243252Z" level=info msg="CreateContainer within sandbox \"f03c0ebe59ba4753f6c170c4f58b38ce0ed3d1aca5047069f2c9072987946888\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5\"" Dec 16 13:13:53.900636 containerd[1496]: time="2025-12-16T13:13:53.900348443Z" level=info msg="StartContainer for \"f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5\"" Dec 16 13:13:53.903249 containerd[1496]: time="2025-12-16T13:13:53.903166134Z" level=info msg="connecting to shim f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5" address="unix:///run/containerd/s/03d8625d9f8c56bc936d4bd3363cfe2e2f32a0ca2701faebd412303b9213159a" protocol=ttrpc version=3 Dec 16 13:13:53.905414 containerd[1496]: time="2025-12-16T13:13:53.905381350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dzk2v,Uid:b23e1b0d-22cc-46e2-a58b-934d61591910,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fcaa422ce236a021fd7602f3cf4eb88f37de9fd0b7e92d54d733b7ef954ad68\"" Dec 16 13:13:53.913804 containerd[1496]: time="2025-12-16T13:13:53.913726056Z" level=info msg="CreateContainer within sandbox \"7fcaa422ce236a021fd7602f3cf4eb88f37de9fd0b7e92d54d733b7ef954ad68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:13:53.932027 containerd[1496]: time="2025-12-16T13:13:53.931980327Z" level=info msg="Container 5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:53.943184 containerd[1496]: time="2025-12-16T13:13:53.943126191Z" level=info msg="CreateContainer within sandbox \"7fcaa422ce236a021fd7602f3cf4eb88f37de9fd0b7e92d54d733b7ef954ad68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487\"" Dec 16 13:13:53.943760 containerd[1496]: time="2025-12-16T13:13:53.943725146Z" level=info msg="StartContainer for \"5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487\"" Dec 16 13:13:53.946265 containerd[1496]: 
time="2025-12-16T13:13:53.946203935Z" level=info msg="connecting to shim 5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487" address="unix:///run/containerd/s/845abb7ca40dc0d3777e5c48c8536581975256f1a6d38579a6956597b33beb9c" protocol=ttrpc version=3 Dec 16 13:13:53.947899 systemd[1]: Started cri-containerd-f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5.scope - libcontainer container f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5. Dec 16 13:13:53.983804 systemd[1]: Started cri-containerd-5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487.scope - libcontainer container 5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487. Dec 16 13:13:54.030281 containerd[1496]: time="2025-12-16T13:13:54.030238405Z" level=info msg="StartContainer for \"f49b52c5d1760105c0b225e70335784c4d5ce236548c41165bf1b59a90382eb5\" returns successfully" Dec 16 13:13:54.030742 kubelet[2766]: I1216 13:13:54.030704 2766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:13:54.074532 containerd[1496]: time="2025-12-16T13:13:54.074479153Z" level=info msg="StartContainer for \"5b3445be9844b9a790d43edf789f56024eb39b39858cef04440eec45b775a487\" returns successfully" Dec 16 13:13:54.401616 kubelet[2766]: I1216 13:13:54.401076 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6kcd2" podStartSLOduration=29.401055183 podStartE2EDuration="29.401055183s" podCreationTimestamp="2025-12-16 13:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:54.398816635 +0000 UTC m=+35.566414986" watchObservedRunningTime="2025-12-16 13:13:54.401055183 +0000 UTC m=+35.568653521" Dec 16 13:13:54.425250 kubelet[2766]: I1216 13:13:54.424987 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dzk2v" 
podStartSLOduration=29.424960046 podStartE2EDuration="29.424960046s" podCreationTimestamp="2025-12-16 13:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:54.421682003 +0000 UTC m=+35.589280341" watchObservedRunningTime="2025-12-16 13:13:54.424960046 +0000 UTC m=+35.592558383" Dec 16 13:13:54.600060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142726650.mount: Deactivated successfully. Dec 16 13:14:11.498198 systemd[1]: Started sshd@7-10.128.0.4:22-139.178.68.195:52208.service - OpenSSH per-connection server daemon (139.178.68.195:52208). Dec 16 13:14:11.799268 sshd[4106]: Accepted publickey for core from 139.178.68.195 port 52208 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:11.801155 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:11.809348 systemd-logind[1471]: New session 8 of user core. Dec 16 13:14:11.819858 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:14:12.131231 sshd[4109]: Connection closed by 139.178.68.195 port 52208 Dec 16 13:14:12.132638 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:12.138792 systemd[1]: sshd@7-10.128.0.4:22-139.178.68.195:52208.service: Deactivated successfully. Dec 16 13:14:12.141866 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:14:12.143695 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:14:12.145968 systemd-logind[1471]: Removed session 8. Dec 16 13:14:17.184728 systemd[1]: Started sshd@8-10.128.0.4:22-139.178.68.195:52210.service - OpenSSH per-connection server daemon (139.178.68.195:52210). 
Dec 16 13:14:17.496331 sshd[4125]: Accepted publickey for core from 139.178.68.195 port 52210 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:17.497103 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:17.504459 systemd-logind[1471]: New session 9 of user core. Dec 16 13:14:17.516853 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:14:17.787281 sshd[4128]: Connection closed by 139.178.68.195 port 52210 Dec 16 13:14:17.787454 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:17.794023 systemd[1]: sshd@8-10.128.0.4:22-139.178.68.195:52210.service: Deactivated successfully. Dec 16 13:14:17.797009 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:14:17.799570 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:14:17.801322 systemd-logind[1471]: Removed session 9. Dec 16 13:14:22.842168 systemd[1]: Started sshd@9-10.128.0.4:22-139.178.68.195:48234.service - OpenSSH per-connection server daemon (139.178.68.195:48234). Dec 16 13:14:23.148452 sshd[4143]: Accepted publickey for core from 139.178.68.195 port 48234 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:23.150821 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:23.158544 systemd-logind[1471]: New session 10 of user core. Dec 16 13:14:23.163893 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:14:23.441184 sshd[4146]: Connection closed by 139.178.68.195 port 48234 Dec 16 13:14:23.442069 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:23.448419 systemd[1]: sshd@9-10.128.0.4:22-139.178.68.195:48234.service: Deactivated successfully. Dec 16 13:14:23.451737 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:14:23.453448 systemd-logind[1471]: Session 10 logged out. 
Waiting for processes to exit. Dec 16 13:14:23.456146 systemd-logind[1471]: Removed session 10. Dec 16 13:14:28.503333 systemd[1]: Started sshd@10-10.128.0.4:22-139.178.68.195:48250.service - OpenSSH per-connection server daemon (139.178.68.195:48250). Dec 16 13:14:28.818938 sshd[4161]: Accepted publickey for core from 139.178.68.195 port 48250 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:28.820725 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:28.827668 systemd-logind[1471]: New session 11 of user core. Dec 16 13:14:28.834840 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:14:29.110501 sshd[4164]: Connection closed by 139.178.68.195 port 48250 Dec 16 13:14:29.111796 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:29.117680 systemd[1]: sshd@10-10.128.0.4:22-139.178.68.195:48250.service: Deactivated successfully. Dec 16 13:14:29.121523 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:14:29.124158 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:14:29.125995 systemd-logind[1471]: Removed session 11. Dec 16 13:14:34.168242 systemd[1]: Started sshd@11-10.128.0.4:22-139.178.68.195:51604.service - OpenSSH per-connection server daemon (139.178.68.195:51604). Dec 16 13:14:34.473695 sshd[4177]: Accepted publickey for core from 139.178.68.195 port 51604 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:34.475323 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:34.482415 systemd-logind[1471]: New session 12 of user core. Dec 16 13:14:34.492900 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 13:14:34.764083 sshd[4180]: Connection closed by 139.178.68.195 port 51604 Dec 16 13:14:34.765401 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:34.771514 systemd[1]: sshd@11-10.128.0.4:22-139.178.68.195:51604.service: Deactivated successfully. Dec 16 13:14:34.774397 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:14:34.777017 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:14:34.779215 systemd-logind[1471]: Removed session 12. Dec 16 13:14:34.828085 systemd[1]: Started sshd@12-10.128.0.4:22-139.178.68.195:51618.service - OpenSSH per-connection server daemon (139.178.68.195:51618). Dec 16 13:14:35.140560 sshd[4194]: Accepted publickey for core from 139.178.68.195 port 51618 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:35.142026 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:35.148616 systemd-logind[1471]: New session 13 of user core. Dec 16 13:14:35.151791 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:14:35.481076 sshd[4198]: Connection closed by 139.178.68.195 port 51618 Dec 16 13:14:35.482867 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:35.488321 systemd[1]: sshd@12-10.128.0.4:22-139.178.68.195:51618.service: Deactivated successfully. Dec 16 13:14:35.491909 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:14:35.494088 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:14:35.497110 systemd-logind[1471]: Removed session 13. Dec 16 13:14:35.534888 systemd[1]: Started sshd@13-10.128.0.4:22-139.178.68.195:51622.service - OpenSSH per-connection server daemon (139.178.68.195:51622). 
Dec 16 13:14:35.844254 sshd[4208]: Accepted publickey for core from 139.178.68.195 port 51622 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:35.845706 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:35.853490 systemd-logind[1471]: New session 14 of user core. Dec 16 13:14:35.858857 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:14:36.146414 sshd[4211]: Connection closed by 139.178.68.195 port 51622 Dec 16 13:14:36.147703 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:36.154551 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:14:36.155384 systemd[1]: sshd@13-10.128.0.4:22-139.178.68.195:51622.service: Deactivated successfully. Dec 16 13:14:36.157911 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:14:36.160833 systemd-logind[1471]: Removed session 14. Dec 16 13:14:41.207966 systemd[1]: Started sshd@14-10.128.0.4:22-139.178.68.195:37944.service - OpenSSH per-connection server daemon (139.178.68.195:37944). Dec 16 13:14:41.521308 sshd[4223]: Accepted publickey for core from 139.178.68.195 port 37944 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:41.523052 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:41.530160 systemd-logind[1471]: New session 15 of user core. Dec 16 13:14:41.536817 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:14:41.814664 sshd[4226]: Connection closed by 139.178.68.195 port 37944 Dec 16 13:14:41.815969 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:41.822312 systemd[1]: sshd@14-10.128.0.4:22-139.178.68.195:37944.service: Deactivated successfully. Dec 16 13:14:41.826160 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 16 13:14:41.827672 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:14:41.829751 systemd-logind[1471]: Removed session 15. Dec 16 13:14:46.870885 systemd[1]: Started sshd@15-10.128.0.4:22-139.178.68.195:37948.service - OpenSSH per-connection server daemon (139.178.68.195:37948). Dec 16 13:14:47.181093 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 37948 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:47.182911 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:47.189892 systemd-logind[1471]: New session 16 of user core. Dec 16 13:14:47.194811 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:14:47.479438 sshd[4241]: Connection closed by 139.178.68.195 port 37948 Dec 16 13:14:47.480424 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:47.486703 systemd[1]: sshd@15-10.128.0.4:22-139.178.68.195:37948.service: Deactivated successfully. Dec 16 13:14:47.490217 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:14:47.491688 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:14:47.493929 systemd-logind[1471]: Removed session 16. Dec 16 13:14:47.533986 systemd[1]: Started sshd@16-10.128.0.4:22-139.178.68.195:37954.service - OpenSSH per-connection server daemon (139.178.68.195:37954). Dec 16 13:14:47.836433 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 37954 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:47.838504 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:47.846092 systemd-logind[1471]: New session 17 of user core. Dec 16 13:14:47.852832 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:14:48.193285 sshd[4258]: Connection closed by 139.178.68.195 port 37954 Dec 16 13:14:48.194213 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:48.200051 systemd[1]: sshd@16-10.128.0.4:22-139.178.68.195:37954.service: Deactivated successfully. Dec 16 13:14:48.202697 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:14:48.204049 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:14:48.207367 systemd-logind[1471]: Removed session 17. Dec 16 13:14:48.247667 systemd[1]: Started sshd@17-10.128.0.4:22-139.178.68.195:37964.service - OpenSSH per-connection server daemon (139.178.68.195:37964). Dec 16 13:14:48.550321 sshd[4268]: Accepted publickey for core from 139.178.68.195 port 37964 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:48.552196 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:48.559550 systemd-logind[1471]: New session 18 of user core. Dec 16 13:14:48.569236 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:14:49.454413 sshd[4271]: Connection closed by 139.178.68.195 port 37964 Dec 16 13:14:49.457814 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:49.469140 systemd[1]: sshd@17-10.128.0.4:22-139.178.68.195:37964.service: Deactivated successfully. Dec 16 13:14:49.473980 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:14:49.475333 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:14:49.477481 systemd-logind[1471]: Removed session 18. Dec 16 13:14:49.509099 systemd[1]: Started sshd@18-10.128.0.4:22-139.178.68.195:37980.service - OpenSSH per-connection server daemon (139.178.68.195:37980). 
Dec 16 13:14:49.814761 sshd[4286]: Accepted publickey for core from 139.178.68.195 port 37980 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:49.817158 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:49.829336 systemd-logind[1471]: New session 19 of user core. Dec 16 13:14:49.833806 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:14:50.250418 sshd[4289]: Connection closed by 139.178.68.195 port 37980 Dec 16 13:14:50.251408 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:50.257870 systemd[1]: sshd@18-10.128.0.4:22-139.178.68.195:37980.service: Deactivated successfully. Dec 16 13:14:50.261990 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:14:50.263293 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:14:50.266064 systemd-logind[1471]: Removed session 19. Dec 16 13:14:50.306039 systemd[1]: Started sshd@19-10.128.0.4:22-139.178.68.195:42222.service - OpenSSH per-connection server daemon (139.178.68.195:42222). Dec 16 13:14:50.620765 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 42222 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:50.622366 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:50.628679 systemd-logind[1471]: New session 20 of user core. Dec 16 13:14:50.640797 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:14:50.909528 sshd[4304]: Connection closed by 139.178.68.195 port 42222 Dec 16 13:14:50.910319 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:50.916920 systemd[1]: sshd@19-10.128.0.4:22-139.178.68.195:42222.service: Deactivated successfully. Dec 16 13:14:50.920289 systemd[1]: session-20.scope: Deactivated successfully. 
Dec 16 13:14:50.922223 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:14:50.924617 systemd-logind[1471]: Removed session 20. Dec 16 13:14:55.963496 systemd[1]: Started sshd@20-10.128.0.4:22-139.178.68.195:42236.service - OpenSSH per-connection server daemon (139.178.68.195:42236). Dec 16 13:14:56.269712 sshd[4319]: Accepted publickey for core from 139.178.68.195 port 42236 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:14:56.271649 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:14:56.278677 systemd-logind[1471]: New session 21 of user core. Dec 16 13:14:56.283775 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:14:56.555641 sshd[4322]: Connection closed by 139.178.68.195 port 42236 Dec 16 13:14:56.557085 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:56.563080 systemd[1]: sshd@20-10.128.0.4:22-139.178.68.195:42236.service: Deactivated successfully. Dec 16 13:14:56.566258 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:14:56.567730 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:14:56.570503 systemd-logind[1471]: Removed session 21. Dec 16 13:15:01.609684 systemd[1]: Started sshd@21-10.128.0.4:22-139.178.68.195:40772.service - OpenSSH per-connection server daemon (139.178.68.195:40772). Dec 16 13:15:01.916415 sshd[4336]: Accepted publickey for core from 139.178.68.195 port 40772 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:01.918152 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:01.925874 systemd-logind[1471]: New session 22 of user core. Dec 16 13:15:01.933868 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:15:02.211834 sshd[4339]: Connection closed by 139.178.68.195 port 40772 Dec 16 13:15:02.214040 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:02.221544 systemd[1]: sshd@21-10.128.0.4:22-139.178.68.195:40772.service: Deactivated successfully. Dec 16 13:15:02.224791 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:15:02.226127 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:15:02.228855 systemd-logind[1471]: Removed session 22. Dec 16 13:15:07.272582 systemd[1]: Started sshd@22-10.128.0.4:22-139.178.68.195:40782.service - OpenSSH per-connection server daemon (139.178.68.195:40782). Dec 16 13:15:07.586948 sshd[4352]: Accepted publickey for core from 139.178.68.195 port 40782 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:07.588493 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:07.595709 systemd-logind[1471]: New session 23 of user core. Dec 16 13:15:07.603811 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:15:07.873100 sshd[4355]: Connection closed by 139.178.68.195 port 40782 Dec 16 13:15:07.873950 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:07.880203 systemd[1]: sshd@22-10.128.0.4:22-139.178.68.195:40782.service: Deactivated successfully. Dec 16 13:15:07.883723 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:15:07.885480 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:15:07.887635 systemd-logind[1471]: Removed session 23. Dec 16 13:15:07.933277 systemd[1]: Started sshd@23-10.128.0.4:22-139.178.68.195:40798.service - OpenSSH per-connection server daemon (139.178.68.195:40798). 
Dec 16 13:15:08.242636 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 40798 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:08.244348 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:08.251717 systemd-logind[1471]: New session 24 of user core. Dec 16 13:15:08.260805 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 13:15:09.996638 containerd[1496]: time="2025-12-16T13:15:09.995676267Z" level=info msg="StopContainer for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" with timeout 30 (s)" Dec 16 13:15:09.999030 containerd[1496]: time="2025-12-16T13:15:09.998920519Z" level=info msg="Stop container \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" with signal terminated" Dec 16 13:15:10.032166 systemd[1]: cri-containerd-51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b.scope: Deactivated successfully. Dec 16 13:15:10.034108 containerd[1496]: time="2025-12-16T13:15:10.034028699Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:15:10.038998 containerd[1496]: time="2025-12-16T13:15:10.038811204Z" level=info msg="received container exit event container_id:\"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" id:\"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" pid:3181 exited_at:{seconds:1765890910 nanos:38414217}" Dec 16 13:15:10.052810 containerd[1496]: time="2025-12-16T13:15:10.052526152Z" level=info msg="StopContainer for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" with timeout 2 (s)" Dec 16 13:15:10.053208 containerd[1496]: time="2025-12-16T13:15:10.053177709Z" level=info msg="Stop container 
\"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" with signal terminated" Dec 16 13:15:10.070817 systemd-networkd[1419]: lxc_health: Link DOWN Dec 16 13:15:10.070830 systemd-networkd[1419]: lxc_health: Lost carrier Dec 16 13:15:10.090811 systemd[1]: cri-containerd-4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420.scope: Deactivated successfully. Dec 16 13:15:10.091937 systemd[1]: cri-containerd-4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420.scope: Consumed 9.336s CPU time, 125.9M memory peak, 128K read from disk, 13.3M written to disk. Dec 16 13:15:10.096965 containerd[1496]: time="2025-12-16T13:15:10.096915600Z" level=info msg="received container exit event container_id:\"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" id:\"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" pid:3436 exited_at:{seconds:1765890910 nanos:95629393}" Dec 16 13:15:10.106391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b-rootfs.mount: Deactivated successfully. Dec 16 13:15:10.129994 containerd[1496]: time="2025-12-16T13:15:10.129925821Z" level=info msg="StopContainer for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" returns successfully" Dec 16 13:15:10.131144 containerd[1496]: time="2025-12-16T13:15:10.131098778Z" level=info msg="StopPodSandbox for \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\"" Dec 16 13:15:10.131431 containerd[1496]: time="2025-12-16T13:15:10.131232786Z" level=info msg="Container to stop \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:15:10.154197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420-rootfs.mount: Deactivated successfully. 
Dec 16 13:15:10.156774 systemd[1]: cri-containerd-3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0.scope: Deactivated successfully. Dec 16 13:15:10.158665 containerd[1496]: time="2025-12-16T13:15:10.158515307Z" level=info msg="received sandbox exit event container_id:\"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" id:\"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" exit_status:137 exited_at:{seconds:1765890910 nanos:156736841}" monitor_name=podsandbox Dec 16 13:15:10.172902 containerd[1496]: time="2025-12-16T13:15:10.172830606Z" level=info msg="StopContainer for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" returns successfully" Dec 16 13:15:10.174747 containerd[1496]: time="2025-12-16T13:15:10.173685079Z" level=info msg="StopPodSandbox for \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\"" Dec 16 13:15:10.174747 containerd[1496]: time="2025-12-16T13:15:10.173837199Z" level=info msg="Container to stop \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:15:10.174747 containerd[1496]: time="2025-12-16T13:15:10.173859284Z" level=info msg="Container to stop \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:15:10.174747 containerd[1496]: time="2025-12-16T13:15:10.173906072Z" level=info msg="Container to stop \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:15:10.174747 containerd[1496]: time="2025-12-16T13:15:10.173924799Z" level=info msg="Container to stop \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:15:10.174747 containerd[1496]: time="2025-12-16T13:15:10.173938367Z" 
level=info msg="Container to stop \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:15:10.192311 systemd[1]: cri-containerd-c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21.scope: Deactivated successfully. Dec 16 13:15:10.194982 containerd[1496]: time="2025-12-16T13:15:10.194933721Z" level=info msg="received sandbox exit event container_id:\"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" id:\"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" exit_status:137 exited_at:{seconds:1765890910 nanos:194229319}" monitor_name=podsandbox Dec 16 13:15:10.215512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0-rootfs.mount: Deactivated successfully. Dec 16 13:15:10.217714 containerd[1496]: time="2025-12-16T13:15:10.217148142Z" level=info msg="shim disconnected" id=3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0 namespace=k8s.io Dec 16 13:15:10.218330 containerd[1496]: time="2025-12-16T13:15:10.217896760Z" level=warning msg="cleaning up after shim disconnected" id=3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0 namespace=k8s.io Dec 16 13:15:10.218330 containerd[1496]: time="2025-12-16T13:15:10.217921858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:15:10.250575 containerd[1496]: time="2025-12-16T13:15:10.250224600Z" level=info msg="received sandbox container exit event sandbox_id:\"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" exit_status:137 exited_at:{seconds:1765890910 nanos:156736841}" monitor_name=criService Dec 16 13:15:10.252280 containerd[1496]: time="2025-12-16T13:15:10.252188906Z" level=info msg="TearDown network for sandbox \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" successfully" Dec 16 13:15:10.252280 containerd[1496]: 
time="2025-12-16T13:15:10.252225254Z" level=info msg="StopPodSandbox for \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" returns successfully" Dec 16 13:15:10.256889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0-shm.mount: Deactivated successfully. Dec 16 13:15:10.273919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21-rootfs.mount: Deactivated successfully. Dec 16 13:15:10.280086 containerd[1496]: time="2025-12-16T13:15:10.280025652Z" level=info msg="shim disconnected" id=c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21 namespace=k8s.io Dec 16 13:15:10.280086 containerd[1496]: time="2025-12-16T13:15:10.280069212Z" level=warning msg="cleaning up after shim disconnected" id=c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21 namespace=k8s.io Dec 16 13:15:10.280696 containerd[1496]: time="2025-12-16T13:15:10.280082995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:15:10.307129 containerd[1496]: time="2025-12-16T13:15:10.306945751Z" level=info msg="received sandbox container exit event sandbox_id:\"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" exit_status:137 exited_at:{seconds:1765890910 nanos:194229319}" monitor_name=criService Dec 16 13:15:10.307958 containerd[1496]: time="2025-12-16T13:15:10.307479958Z" level=info msg="TearDown network for sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" successfully" Dec 16 13:15:10.307958 containerd[1496]: time="2025-12-16T13:15:10.307508553Z" level=info msg="StopPodSandbox for \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" returns successfully" Dec 16 13:15:10.375770 kubelet[2766]: I1216 13:15:10.375714 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-xtables-lock\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.375770 kubelet[2766]: I1216 13:15:10.375779 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cni-path\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.376948 kubelet[2766]: I1216 13:15:10.375810 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6sgs\" (UniqueName: \"kubernetes.io/projected/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-kube-api-access-w6sgs\") pod \"8636be1b-1d1b-4cf3-870b-fc7fff7b5169\" (UID: \"8636be1b-1d1b-4cf3-870b-fc7fff7b5169\") " Dec 16 13:15:10.376948 kubelet[2766]: I1216 13:15:10.375834 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-hostproc\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.376948 kubelet[2766]: I1216 13:15:10.375854 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-etc-cni-netd\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.376948 kubelet[2766]: I1216 13:15:10.375881 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-cilium-config-path\") pod \"8636be1b-1d1b-4cf3-870b-fc7fff7b5169\" (UID: \"8636be1b-1d1b-4cf3-870b-fc7fff7b5169\") " Dec 16 13:15:10.376948 kubelet[2766]: I1216 13:15:10.375910 2766 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-run\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.376948 kubelet[2766]: I1216 13:15:10.375935 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/def05b8a-2050-4643-9742-8d47acffc818-clustermesh-secrets\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377259 kubelet[2766]: I1216 13:15:10.375957 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-bpf-maps\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377259 kubelet[2766]: I1216 13:15:10.375985 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-net\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377259 kubelet[2766]: I1216 13:15:10.376007 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-lib-modules\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377259 kubelet[2766]: I1216 13:15:10.376033 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-hubble-tls\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: 
\"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377259 kubelet[2766]: I1216 13:15:10.376057 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-cgroup\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377259 kubelet[2766]: I1216 13:15:10.376083 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmbj\" (UniqueName: \"kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-kube-api-access-pcmbj\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377552 kubelet[2766]: I1216 13:15:10.376110 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-kernel\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.377552 kubelet[2766]: I1216 13:15:10.376138 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/def05b8a-2050-4643-9742-8d47acffc818-cilium-config-path\") pod \"def05b8a-2050-4643-9742-8d47acffc818\" (UID: \"def05b8a-2050-4643-9742-8d47acffc818\") " Dec 16 13:15:10.380785 kubelet[2766]: I1216 13:15:10.375876 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.380888 kubelet[2766]: I1216 13:15:10.380083 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-hostproc" (OuterVolumeSpecName: "hostproc") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.380888 kubelet[2766]: I1216 13:15:10.380141 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.380888 kubelet[2766]: I1216 13:15:10.380716 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cni-path" (OuterVolumeSpecName: "cni-path") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.380888 kubelet[2766]: I1216 13:15:10.380746 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.380888 kubelet[2766]: I1216 13:15:10.380840 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.381785 kubelet[2766]: I1216 13:15:10.381741 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.383791 kubelet[2766]: I1216 13:15:10.383674 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.384071 kubelet[2766]: I1216 13:15:10.384031 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.391783 kubelet[2766]: I1216 13:15:10.391735 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:15:10.394618 kubelet[2766]: I1216 13:15:10.394438 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/def05b8a-2050-4643-9742-8d47acffc818-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:15:10.394850 kubelet[2766]: I1216 13:15:10.394820 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-kube-api-access-pcmbj" (OuterVolumeSpecName: "kube-api-access-pcmbj") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "kube-api-access-pcmbj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:15:10.394988 kubelet[2766]: I1216 13:15:10.394838 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-kube-api-access-w6sgs" (OuterVolumeSpecName: "kube-api-access-w6sgs") pod "8636be1b-1d1b-4cf3-870b-fc7fff7b5169" (UID: "8636be1b-1d1b-4cf3-870b-fc7fff7b5169"). InnerVolumeSpecName "kube-api-access-w6sgs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:15:10.396250 kubelet[2766]: I1216 13:15:10.396200 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:15:10.397063 kubelet[2766]: I1216 13:15:10.397008 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def05b8a-2050-4643-9742-8d47acffc818-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "def05b8a-2050-4643-9742-8d47acffc818" (UID: "def05b8a-2050-4643-9742-8d47acffc818"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:15:10.397875 kubelet[2766]: I1216 13:15:10.397845 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8636be1b-1d1b-4cf3-870b-fc7fff7b5169" (UID: "8636be1b-1d1b-4cf3-870b-fc7fff7b5169"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:15:10.476846 kubelet[2766]: I1216 13:15:10.476795 2766 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-hubble-tls\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.476846 kubelet[2766]: I1216 13:15:10.476839 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-cgroup\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.476846 kubelet[2766]: I1216 13:15:10.476855 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pcmbj\" (UniqueName: \"kubernetes.io/projected/def05b8a-2050-4643-9742-8d47acffc818-kube-api-access-pcmbj\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.476846 kubelet[2766]: I1216 13:15:10.476872 2766 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-kernel\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476888 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/def05b8a-2050-4643-9742-8d47acffc818-cilium-config-path\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476902 2766 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-xtables-lock\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 
16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476916 2766 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cni-path\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476930 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6sgs\" (UniqueName: \"kubernetes.io/projected/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-kube-api-access-w6sgs\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476943 2766 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-hostproc\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476960 2766 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-etc-cni-netd\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477171 kubelet[2766]: I1216 13:15:10.476973 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8636be1b-1d1b-4cf3-870b-fc7fff7b5169-cilium-config-path\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477379 kubelet[2766]: I1216 13:15:10.476988 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-cilium-run\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477379 kubelet[2766]: I1216 13:15:10.477005 2766 reconciler_common.go:299] 
"Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/def05b8a-2050-4643-9742-8d47acffc818-clustermesh-secrets\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477379 kubelet[2766]: I1216 13:15:10.477019 2766 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-bpf-maps\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477379 kubelet[2766]: I1216 13:15:10.477033 2766 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-host-proc-sys-net\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.477379 kubelet[2766]: I1216 13:15:10.477048 2766 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/def05b8a-2050-4643-9742-8d47acffc818-lib-modules\") on node \"ci-4459-2-2-5a4c9578efbd9e4e821d.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:15:10.597437 kubelet[2766]: I1216 13:15:10.596919 2766 scope.go:117] "RemoveContainer" containerID="4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420" Dec 16 13:15:10.604539 containerd[1496]: time="2025-12-16T13:15:10.604485669Z" level=info msg="RemoveContainer for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\"" Dec 16 13:15:10.620007 systemd[1]: Removed slice kubepods-besteffort-pod8636be1b_1d1b_4cf3_870b_fc7fff7b5169.slice - libcontainer container kubepods-besteffort-pod8636be1b_1d1b_4cf3_870b_fc7fff7b5169.slice. 
Dec 16 13:15:10.620396 containerd[1496]: time="2025-12-16T13:15:10.620157343Z" level=info msg="RemoveContainer for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" returns successfully" Dec 16 13:15:10.621449 kubelet[2766]: I1216 13:15:10.621165 2766 scope.go:117] "RemoveContainer" containerID="773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc" Dec 16 13:15:10.625503 systemd[1]: Removed slice kubepods-burstable-poddef05b8a_2050_4643_9742_8d47acffc818.slice - libcontainer container kubepods-burstable-poddef05b8a_2050_4643_9742_8d47acffc818.slice. Dec 16 13:15:10.625951 systemd[1]: kubepods-burstable-poddef05b8a_2050_4643_9742_8d47acffc818.slice: Consumed 9.484s CPU time, 126.3M memory peak, 128K read from disk, 13.3M written to disk. Dec 16 13:15:10.634453 containerd[1496]: time="2025-12-16T13:15:10.633514624Z" level=info msg="RemoveContainer for \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\"" Dec 16 13:15:10.663879 containerd[1496]: time="2025-12-16T13:15:10.663565655Z" level=info msg="RemoveContainer for \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\" returns successfully" Dec 16 13:15:10.664566 kubelet[2766]: I1216 13:15:10.664500 2766 scope.go:117] "RemoveContainer" containerID="674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6" Dec 16 13:15:10.676858 containerd[1496]: time="2025-12-16T13:15:10.676723606Z" level=info msg="RemoveContainer for \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\"" Dec 16 13:15:10.686218 containerd[1496]: time="2025-12-16T13:15:10.686097134Z" level=info msg="RemoveContainer for \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\" returns successfully" Dec 16 13:15:10.686934 kubelet[2766]: I1216 13:15:10.686875 2766 scope.go:117] "RemoveContainer" containerID="e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd" Dec 16 13:15:10.689826 containerd[1496]: time="2025-12-16T13:15:10.689771024Z" 
level=info msg="RemoveContainer for \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\"" Dec 16 13:15:10.698907 containerd[1496]: time="2025-12-16T13:15:10.698715657Z" level=info msg="RemoveContainer for \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\" returns successfully" Dec 16 13:15:10.699402 kubelet[2766]: I1216 13:15:10.699234 2766 scope.go:117] "RemoveContainer" containerID="2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7" Dec 16 13:15:10.702348 containerd[1496]: time="2025-12-16T13:15:10.702260814Z" level=info msg="RemoveContainer for \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\"" Dec 16 13:15:10.709087 containerd[1496]: time="2025-12-16T13:15:10.709029751Z" level=info msg="RemoveContainer for \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\" returns successfully" Dec 16 13:15:10.709463 kubelet[2766]: I1216 13:15:10.709427 2766 scope.go:117] "RemoveContainer" containerID="4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420" Dec 16 13:15:10.710068 containerd[1496]: time="2025-12-16T13:15:10.710007439Z" level=error msg="ContainerStatus for \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\": not found" Dec 16 13:15:10.710383 kubelet[2766]: E1216 13:15:10.710268 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\": not found" containerID="4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420" Dec 16 13:15:10.710496 kubelet[2766]: I1216 13:15:10.710368 2766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420"} err="failed to get container status \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f0e8b754b590bca97ef176220e4703a5c7e00dc7048fb7c3682bea00dcf2420\": not found" Dec 16 13:15:10.710496 kubelet[2766]: I1216 13:15:10.710432 2766 scope.go:117] "RemoveContainer" containerID="773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc" Dec 16 13:15:10.710735 containerd[1496]: time="2025-12-16T13:15:10.710692470Z" level=error msg="ContainerStatus for \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\": not found" Dec 16 13:15:10.710994 kubelet[2766]: E1216 13:15:10.710952 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\": not found" containerID="773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc" Dec 16 13:15:10.711094 kubelet[2766]: I1216 13:15:10.710999 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc"} err="failed to get container status \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"773727bd8bfbd07f4a738c6e4164ce21bf73b6df6db3b90a7410be76b4d161cc\": not found" Dec 16 13:15:10.711094 kubelet[2766]: I1216 13:15:10.711029 2766 scope.go:117] "RemoveContainer" containerID="674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6" Dec 16 13:15:10.711683 containerd[1496]: 
time="2025-12-16T13:15:10.711555141Z" level=error msg="ContainerStatus for \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\": not found"
Dec 16 13:15:10.711898 kubelet[2766]: E1216 13:15:10.711844 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\": not found" containerID="674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6"
Dec 16 13:15:10.712057 kubelet[2766]: I1216 13:15:10.711889 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6"} err="failed to get container status \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"674d4bb5cb4f78e64c57804df682d4d1464a8dde884894dc1b798f91452eebc6\": not found"
Dec 16 13:15:10.712057 kubelet[2766]: I1216 13:15:10.711917 2766 scope.go:117] "RemoveContainer" containerID="e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd"
Dec 16 13:15:10.712269 containerd[1496]: time="2025-12-16T13:15:10.712186764Z" level=error msg="ContainerStatus for \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\": not found"
Dec 16 13:15:10.712573 kubelet[2766]: E1216 13:15:10.712544 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\": not found" containerID="e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd"
Dec 16 13:15:10.712701 kubelet[2766]: I1216 13:15:10.712598 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd"} err="failed to get container status \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7b2d867bb18e7fae5aa14a215ce8aec2976902b0ef4488d5afd52fb79d73bcd\": not found"
Dec 16 13:15:10.712701 kubelet[2766]: I1216 13:15:10.712662 2766 scope.go:117] "RemoveContainer" containerID="2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7"
Dec 16 13:15:10.713047 containerd[1496]: time="2025-12-16T13:15:10.712967116Z" level=error msg="ContainerStatus for \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\": not found"
Dec 16 13:15:10.713383 kubelet[2766]: E1216 13:15:10.713166 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\": not found" containerID="2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7"
Dec 16 13:15:10.713383 kubelet[2766]: I1216 13:15:10.713215 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7"} err="failed to get container status \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a2673b45b3a8593987a637bfc8b999efa1a583afb5588725d1254b2447e9bf7\": not found"
Dec 16 13:15:10.713383 kubelet[2766]: I1216 13:15:10.713242 2766 scope.go:117] "RemoveContainer" containerID="51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b"
Dec 16 13:15:10.716729 containerd[1496]: time="2025-12-16T13:15:10.716576789Z" level=info msg="RemoveContainer for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\""
Dec 16 13:15:10.724710 containerd[1496]: time="2025-12-16T13:15:10.724638469Z" level=info msg="RemoveContainer for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" returns successfully"
Dec 16 13:15:10.725054 kubelet[2766]: I1216 13:15:10.724992 2766 scope.go:117] "RemoveContainer" containerID="51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b"
Dec 16 13:15:10.725473 containerd[1496]: time="2025-12-16T13:15:10.725397491Z" level=error msg="ContainerStatus for \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\": not found"
Dec 16 13:15:10.725807 kubelet[2766]: E1216 13:15:10.725727 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\": not found" containerID="51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b"
Dec 16 13:15:10.725807 kubelet[2766]: I1216 13:15:10.725772 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b"} err="failed to get container status \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"51f8b60ca6ab219aed6f2ccb4396b5ba3ba80c8056b33d05ded44d7128c13c3b\": not found"
Dec 16 13:15:11.105130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21-shm.mount: Deactivated successfully.
Dec 16 13:15:11.105291 systemd[1]: var-lib-kubelet-pods-def05b8a\x2d2050\x2d4643\x2d9742\x2d8d47acffc818-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpcmbj.mount: Deactivated successfully.
Dec 16 13:15:11.105423 systemd[1]: var-lib-kubelet-pods-8636be1b\x2d1d1b\x2d4cf3\x2d870b\x2dfc7fff7b5169-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw6sgs.mount: Deactivated successfully.
Dec 16 13:15:11.105539 systemd[1]: var-lib-kubelet-pods-def05b8a\x2d2050\x2d4643\x2d9742\x2d8d47acffc818-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 13:15:11.105673 systemd[1]: var-lib-kubelet-pods-def05b8a\x2d2050\x2d4643\x2d9742\x2d8d47acffc818-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 13:15:11.138202 kubelet[2766]: I1216 13:15:11.138129 2766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8636be1b-1d1b-4cf3-870b-fc7fff7b5169" path="/var/lib/kubelet/pods/8636be1b-1d1b-4cf3-870b-fc7fff7b5169/volumes"
Dec 16 13:15:11.139144 kubelet[2766]: I1216 13:15:11.139052 2766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="def05b8a-2050-4643-9742-8d47acffc818" path="/var/lib/kubelet/pods/def05b8a-2050-4643-9742-8d47acffc818/volumes"
Dec 16 13:15:11.940352 sshd[4370]: Connection closed by 139.178.68.195 port 40798
Dec 16 13:15:11.941405 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:11.952794 systemd[1]: sshd@23-10.128.0.4:22-139.178.68.195:40798.service: Deactivated successfully.
Dec 16 13:15:11.958776 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:15:11.961531 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:15:11.964976 systemd-logind[1471]: Removed session 24.
Dec 16 13:15:11.995895 systemd[1]: Started sshd@24-10.128.0.4:22-139.178.68.195:57420.service - OpenSSH per-connection server daemon (139.178.68.195:57420).
Dec 16 13:15:12.299114 sshd[4519]: Accepted publickey for core from 139.178.68.195 port 57420 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:12.300939 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:12.308666 systemd-logind[1471]: New session 25 of user core.
Dec 16 13:15:12.323916 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:15:12.881243 ntpd[1664]: Deleting 10 lxc_health, [fe80::2031:4dff:fed0:ddf1%8]:123, stats: received=0, sent=0, dropped=0, active_time=80 secs
Dec 16 13:15:12.881940 ntpd[1664]: 16 Dec 13:15:12 ntpd[1664]: Deleting 10 lxc_health, [fe80::2031:4dff:fed0:ddf1%8]:123, stats: received=0, sent=0, dropped=0, active_time=80 secs
Dec 16 13:15:13.048312 systemd[1]: Created slice kubepods-burstable-pod9a68c979_5871_4dd3_a204_0730ae0ca147.slice - libcontainer container kubepods-burstable-pod9a68c979_5871_4dd3_a204_0730ae0ca147.slice.
Dec 16 13:15:13.052290 sshd[4522]: Connection closed by 139.178.68.195 port 57420
Dec 16 13:15:13.051305 sshd-session[4519]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:13.064946 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:15:13.066536 systemd[1]: sshd@24-10.128.0.4:22-139.178.68.195:57420.service: Deactivated successfully.
Dec 16 13:15:13.073143 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:15:13.080165 systemd-logind[1471]: Removed session 25.
Dec 16 13:15:13.098857 kubelet[2766]: I1216 13:15:13.098816 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a68c979-5871-4dd3-a204-0730ae0ca147-cilium-ipsec-secrets\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.099723 kubelet[2766]: I1216 13:15:13.099458 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a68c979-5871-4dd3-a204-0730ae0ca147-hubble-tls\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.099723 kubelet[2766]: I1216 13:15:13.099554 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-hostproc\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.099723 kubelet[2766]: I1216 13:15:13.099633 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a68c979-5871-4dd3-a204-0730ae0ca147-clustermesh-secrets\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100128 kubelet[2766]: I1216 13:15:13.099664 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a68c979-5871-4dd3-a204-0730ae0ca147-cilium-config-path\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100128 kubelet[2766]: I1216 13:15:13.099998 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-host-proc-sys-kernel\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100128 kubelet[2766]: I1216 13:15:13.100067 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-bpf-maps\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100128 kubelet[2766]: I1216 13:15:13.100093 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-cni-path\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100603 kubelet[2766]: I1216 13:15:13.100403 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-lib-modules\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100603 kubelet[2766]: I1216 13:15:13.100468 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzklk\" (UniqueName: \"kubernetes.io/projected/9a68c979-5871-4dd3-a204-0730ae0ca147-kube-api-access-fzklk\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100603 kubelet[2766]: I1216 13:15:13.100535 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-cilium-cgroup\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.100603 kubelet[2766]: I1216 13:15:13.100560 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-cilium-run\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.102094 kubelet[2766]: I1216 13:15:13.101889 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-etc-cni-netd\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.102094 kubelet[2766]: I1216 13:15:13.101983 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-host-proc-sys-net\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.102094 kubelet[2766]: I1216 13:15:13.102012 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a68c979-5871-4dd3-a204-0730ae0ca147-xtables-lock\") pod \"cilium-5nx4b\" (UID: \"9a68c979-5871-4dd3-a204-0730ae0ca147\") " pod="kube-system/cilium-5nx4b"
Dec 16 13:15:13.113703 systemd[1]: Started sshd@25-10.128.0.4:22-139.178.68.195:57432.service - OpenSSH per-connection server daemon (139.178.68.195:57432).
Dec 16 13:15:13.358713 containerd[1496]: time="2025-12-16T13:15:13.358665016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nx4b,Uid:9a68c979-5871-4dd3-a204-0730ae0ca147,Namespace:kube-system,Attempt:0,}"
Dec 16 13:15:13.392051 containerd[1496]: time="2025-12-16T13:15:13.391937811Z" level=info msg="connecting to shim 1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea" address="unix:///run/containerd/s/df031e046dd9b49605d121af74212a79cec3995d72ef9d2f1045c6dd0fbd9311" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:15:13.432893 systemd[1]: Started cri-containerd-1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea.scope - libcontainer container 1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea.
Dec 16 13:15:13.446731 sshd[4533]: Accepted publickey for core from 139.178.68.195 port 57432 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:13.449563 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:13.468758 systemd-logind[1471]: New session 26 of user core.
Dec 16 13:15:13.470957 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:15:13.496400 containerd[1496]: time="2025-12-16T13:15:13.496348768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nx4b,Uid:9a68c979-5871-4dd3-a204-0730ae0ca147,Namespace:kube-system,Attempt:0,} returns sandbox id \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\""
Dec 16 13:15:13.505894 containerd[1496]: time="2025-12-16T13:15:13.505839316Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:15:13.516033 containerd[1496]: time="2025-12-16T13:15:13.515369193Z" level=info msg="Container 525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:15:13.521972 containerd[1496]: time="2025-12-16T13:15:13.521933605Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b\""
Dec 16 13:15:13.522972 containerd[1496]: time="2025-12-16T13:15:13.522922484Z" level=info msg="StartContainer for \"525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b\""
Dec 16 13:15:13.524642 containerd[1496]: time="2025-12-16T13:15:13.524580901Z" level=info msg="connecting to shim 525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b" address="unix:///run/containerd/s/df031e046dd9b49605d121af74212a79cec3995d72ef9d2f1045c6dd0fbd9311" protocol=ttrpc version=3
Dec 16 13:15:13.552851 systemd[1]: Started cri-containerd-525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b.scope - libcontainer container 525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b.
Dec 16 13:15:13.596523 containerd[1496]: time="2025-12-16T13:15:13.596476321Z" level=info msg="StartContainer for \"525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b\" returns successfully"
Dec 16 13:15:13.608965 systemd[1]: cri-containerd-525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b.scope: Deactivated successfully.
Dec 16 13:15:13.619206 containerd[1496]: time="2025-12-16T13:15:13.619087213Z" level=info msg="received container exit event container_id:\"525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b\" id:\"525ab722b202087653e169f0a1f24757472512e4406e60e0b2f25dd8a1bdf71b\" pid:4601 exited_at:{seconds:1765890913 nanos:618030681}"
Dec 16 13:15:13.659261 sshd[4583]: Connection closed by 139.178.68.195 port 57432
Dec 16 13:15:13.660919 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:13.672049 systemd[1]: sshd@25-10.128.0.4:22-139.178.68.195:57432.service: Deactivated successfully.
Dec 16 13:15:13.676169 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:15:13.678682 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:15:13.681353 systemd-logind[1471]: Removed session 26.
Dec 16 13:15:13.717928 systemd[1]: Started sshd@26-10.128.0.4:22-139.178.68.195:57436.service - OpenSSH per-connection server daemon (139.178.68.195:57436).
Dec 16 13:15:14.023059 sshd[4640]: Accepted publickey for core from 139.178.68.195 port 57436 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:14.024753 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:14.031346 systemd-logind[1471]: New session 27 of user core.
Dec 16 13:15:14.037825 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:15:14.339934 kubelet[2766]: E1216 13:15:14.339735 2766 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:15:14.634168 containerd[1496]: time="2025-12-16T13:15:14.634029422Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:15:14.646577 containerd[1496]: time="2025-12-16T13:15:14.646302230Z" level=info msg="Container c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:15:14.659381 containerd[1496]: time="2025-12-16T13:15:14.659317262Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4\""
Dec 16 13:15:14.659971 containerd[1496]: time="2025-12-16T13:15:14.659932028Z" level=info msg="StartContainer for \"c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4\""
Dec 16 13:15:14.664864 containerd[1496]: time="2025-12-16T13:15:14.664811309Z" level=info msg="connecting to shim c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4" address="unix:///run/containerd/s/df031e046dd9b49605d121af74212a79cec3995d72ef9d2f1045c6dd0fbd9311" protocol=ttrpc version=3
Dec 16 13:15:14.706846 systemd[1]: Started cri-containerd-c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4.scope - libcontainer container c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4.
Dec 16 13:15:14.758563 containerd[1496]: time="2025-12-16T13:15:14.758500600Z" level=info msg="StartContainer for \"c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4\" returns successfully"
Dec 16 13:15:14.768003 systemd[1]: cri-containerd-c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4.scope: Deactivated successfully.
Dec 16 13:15:14.770374 containerd[1496]: time="2025-12-16T13:15:14.770317367Z" level=info msg="received container exit event container_id:\"c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4\" id:\"c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4\" pid:4663 exited_at:{seconds:1765890914 nanos:769825837}"
Dec 16 13:15:14.803075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c163eef1e0f5387fc40d3d11d01810f33ee9ec59fb1eba94148b240fc2c8aed4-rootfs.mount: Deactivated successfully.
Dec 16 13:15:15.640182 containerd[1496]: time="2025-12-16T13:15:15.639581037Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:15:15.658963 containerd[1496]: time="2025-12-16T13:15:15.658892533Z" level=info msg="Container 5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:15:15.671114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065459597.mount: Deactivated successfully.
Dec 16 13:15:15.681980 containerd[1496]: time="2025-12-16T13:15:15.681925741Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329\""
Dec 16 13:15:15.682813 containerd[1496]: time="2025-12-16T13:15:15.682744119Z" level=info msg="StartContainer for \"5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329\""
Dec 16 13:15:15.687244 containerd[1496]: time="2025-12-16T13:15:15.687202269Z" level=info msg="connecting to shim 5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329" address="unix:///run/containerd/s/df031e046dd9b49605d121af74212a79cec3995d72ef9d2f1045c6dd0fbd9311" protocol=ttrpc version=3
Dec 16 13:15:15.720855 systemd[1]: Started cri-containerd-5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329.scope - libcontainer container 5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329.
Dec 16 13:15:15.825045 containerd[1496]: time="2025-12-16T13:15:15.824965231Z" level=info msg="StartContainer for \"5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329\" returns successfully"
Dec 16 13:15:15.829485 systemd[1]: cri-containerd-5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329.scope: Deactivated successfully.
Dec 16 13:15:15.832916 containerd[1496]: time="2025-12-16T13:15:15.832867236Z" level=info msg="received container exit event container_id:\"5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329\" id:\"5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329\" pid:4708 exited_at:{seconds:1765890915 nanos:832505013}"
Dec 16 13:15:15.867926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5611bc70f5b52fc590f0c99a4cd190062ff10e461731ea3cbb3851682882d329-rootfs.mount: Deactivated successfully.
Dec 16 13:15:16.652635 containerd[1496]: time="2025-12-16T13:15:16.651195910Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:15:16.667643 containerd[1496]: time="2025-12-16T13:15:16.667568635Z" level=info msg="Container 1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:15:16.680431 containerd[1496]: time="2025-12-16T13:15:16.680373198Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf\""
Dec 16 13:15:16.681008 containerd[1496]: time="2025-12-16T13:15:16.680972729Z" level=info msg="StartContainer for \"1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf\""
Dec 16 13:15:16.684491 containerd[1496]: time="2025-12-16T13:15:16.684359564Z" level=info msg="connecting to shim 1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf" address="unix:///run/containerd/s/df031e046dd9b49605d121af74212a79cec3995d72ef9d2f1045c6dd0fbd9311" protocol=ttrpc version=3
Dec 16 13:15:16.729934 systemd[1]: Started cri-containerd-1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf.scope - libcontainer container 1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf.
Dec 16 13:15:16.785305 systemd[1]: cri-containerd-1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf.scope: Deactivated successfully.
Dec 16 13:15:16.788241 containerd[1496]: time="2025-12-16T13:15:16.787400421Z" level=info msg="received container exit event container_id:\"1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf\" id:\"1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf\" pid:4749 exited_at:{seconds:1765890916 nanos:786450293}"
Dec 16 13:15:16.800936 containerd[1496]: time="2025-12-16T13:15:16.800887667Z" level=info msg="StartContainer for \"1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf\" returns successfully"
Dec 16 13:15:16.830255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ac556da78b1d9d74030ae3d8b1327658c0944bbf2ed74bebd661b04892b2dbf-rootfs.mount: Deactivated successfully.
Dec 16 13:15:17.658812 containerd[1496]: time="2025-12-16T13:15:17.658729211Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:15:17.683631 containerd[1496]: time="2025-12-16T13:15:17.681938606Z" level=info msg="Container 5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:15:17.694759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079754719.mount: Deactivated successfully.
Dec 16 13:15:17.700036 containerd[1496]: time="2025-12-16T13:15:17.699945787Z" level=info msg="CreateContainer within sandbox \"1debabb780e0dddd14c26aade6eea0b1a73d03aa37fa2caa6983538986a313ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714\""
Dec 16 13:15:17.700857 containerd[1496]: time="2025-12-16T13:15:17.700808774Z" level=info msg="StartContainer for \"5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714\""
Dec 16 13:15:17.702759 containerd[1496]: time="2025-12-16T13:15:17.702720791Z" level=info msg="connecting to shim 5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714" address="unix:///run/containerd/s/df031e046dd9b49605d121af74212a79cec3995d72ef9d2f1045c6dd0fbd9311" protocol=ttrpc version=3
Dec 16 13:15:17.739939 systemd[1]: Started cri-containerd-5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714.scope - libcontainer container 5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714.
Dec 16 13:15:17.811163 containerd[1496]: time="2025-12-16T13:15:17.811103173Z" level=info msg="StartContainer for \"5208ea6d9e06c34b6419fd622039296a595a99a83caddf8434fc2b6827cf8714\" returns successfully"
Dec 16 13:15:18.334638 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:15:19.064371 containerd[1496]: time="2025-12-16T13:15:19.064279615Z" level=info msg="StopPodSandbox for \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\""
Dec 16 13:15:19.064968 containerd[1496]: time="2025-12-16T13:15:19.064522246Z" level=info msg="TearDown network for sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" successfully"
Dec 16 13:15:19.064968 containerd[1496]: time="2025-12-16T13:15:19.064547585Z" level=info msg="StopPodSandbox for \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" returns successfully"
Dec 16 13:15:19.065326 containerd[1496]: time="2025-12-16T13:15:19.065296174Z" level=info msg="RemovePodSandbox for \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\""
Dec 16 13:15:19.065453 containerd[1496]: time="2025-12-16T13:15:19.065336178Z" level=info msg="Forcibly stopping sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\""
Dec 16 13:15:19.065513 containerd[1496]: time="2025-12-16T13:15:19.065446897Z" level=info msg="TearDown network for sandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" successfully"
Dec 16 13:15:19.067254 containerd[1496]: time="2025-12-16T13:15:19.067219502Z" level=info msg="Ensure that sandbox c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21 in task-service has been cleanup successfully"
Dec 16 13:15:19.071551 containerd[1496]: time="2025-12-16T13:15:19.071506575Z" level=info msg="RemovePodSandbox \"c12a94eb26d158cf672360c965e2ce27d8eefd25383b221fa2971e20187dab21\" returns successfully"
Dec 16 13:15:19.071989 containerd[1496]: time="2025-12-16T13:15:19.071941699Z" level=info msg="StopPodSandbox for \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\""
Dec 16 13:15:19.072141 containerd[1496]: time="2025-12-16T13:15:19.072108625Z" level=info msg="TearDown network for sandbox \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" successfully"
Dec 16 13:15:19.072141 containerd[1496]: time="2025-12-16T13:15:19.072135663Z" level=info msg="StopPodSandbox for \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" returns successfully"
Dec 16 13:15:19.072550 containerd[1496]: time="2025-12-16T13:15:19.072507504Z" level=info msg="RemovePodSandbox for \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\""
Dec 16 13:15:19.072550 containerd[1496]: time="2025-12-16T13:15:19.072546110Z" level=info msg="Forcibly stopping sandbox \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\""
Dec 16 13:15:19.072795 containerd[1496]: time="2025-12-16T13:15:19.072697742Z" level=info msg="TearDown network for sandbox \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" successfully"
Dec 16 13:15:19.073968 containerd[1496]: time="2025-12-16T13:15:19.073923846Z" level=info msg="Ensure that sandbox 3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0 in task-service has been cleanup successfully"
Dec 16 13:15:19.077577 containerd[1496]: time="2025-12-16T13:15:19.077542463Z" level=info msg="RemovePodSandbox \"3e90b1c4c60757d5eacd71f019c281ba94a7f745dfd5c00441deabdac9649ce0\" returns successfully"
Dec 16 13:15:21.744863 systemd-networkd[1419]: lxc_health: Link UP
Dec 16 13:15:21.745336 systemd-networkd[1419]: lxc_health: Gained carrier
Dec 16 13:15:23.094850 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Dec 16 13:15:23.394165 kubelet[2766]: I1216 13:15:23.393722 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5nx4b" podStartSLOduration=10.393698194 podStartE2EDuration="10.393698194s" podCreationTimestamp="2025-12-16 13:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:15:18.683432277 +0000 UTC m=+119.851030614" watchObservedRunningTime="2025-12-16 13:15:23.393698194 +0000 UTC m=+124.561296533"
Dec 16 13:15:25.881976 ntpd[1664]: Listen normally on 13 lxc_health [fe80::dc26:52ff:fe8f:3be1%14]:123
Dec 16 13:15:25.882681 ntpd[1664]: 16 Dec 13:15:25 ntpd[1664]: Listen normally on 13 lxc_health [fe80::dc26:52ff:fe8f:3be1%14]:123
Dec 16 13:15:29.567518 sshd[4643]: Connection closed by 139.178.68.195 port 57436
Dec 16 13:15:29.568553 sshd-session[4640]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:29.574939 systemd[1]: sshd@26-10.128.0.4:22-139.178.68.195:57436.service: Deactivated successfully.
Dec 16 13:15:29.578431 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:15:29.580104 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:15:29.582651 systemd-logind[1471]: Removed session 27.